Database Developer's Guide With Visual C++ 4 Second Edition
This book emphasizes optimization of database design and implementing Access SQL for Access, dBASE, Paradox, and Btrieve databases, and it provides in-depth coverage of the networking issues surrounding databases.
- CD-ROM includes a data dictionary application, crosstab query generator, multiform graphical front end, DDE and OLE applications with Excel, and DAO applications
- Gives in-depth coverage of Windows database programming techniques
- Teaches all of Visual C++'s Data Access Object features, including the Jet database engine
Table of Contents

Who Should Read This Book?
What's New in This Edition
- I - Visual C++ Data Access
- 1 - Positioning Visual C++ in the Desktop Database Market
- 2 - Understanding MFC's Data Access Classes
- 3 - Using Visual C++ Data Access Functions
- II - Database and Query Design Concepts
- 4 - Optimizing the Design of Relational Databases
- 5 - Learning Structured Query Language
- 6 - The Microsoft Jet Database Engine
- 7 - Using the Open Database Connectivity API
- 8 - Running Crosstab and Action Queries
- III - An Introduction to Database Front-End Design
- 9 - Designing a Decision-Support Application
- 10 - Creating Your Own Data Access Controls
- 11 - Using the New Win32 Common Controls
- 12 - Printing Reports with Report Generators
- IV - Advanced Programming with Visual C++
- 13 - Understanding MFC's DAO Classes
- 14 - Using MFC's DAO Classes
- 15 - Designing Online Transaction-Processing Applications
- 16 - Creating OLE Controls with Visual C++ 4
- 17 - Using OLE Controls and Automation with Visual C++ Applications
- 18 - Translating Visual Basic and Visual Basic for Applications Code to Visual C++
- V - Multiuser Database Applications
- 19 - Running Visual C++ Database Applications on a Network
- 20 - Creating Front Ends for Client-Server Databases
- 21 - Interacting with Microsoft Mail, MAPI, and TAPI
- VI - Distributing Production Database Applications
- 22 - Documenting Your Database Applications
- 23 - Creating Help Files for Database Applications
- 24 - Creating Distribution Disks for Visual C++ Applications
- A - Resources for Developing Visual C++ Database Applications
- B - Naming and Formatting Conventions for Visual C++ Objects and Variables
- C - Using the CD-ROM
Installing the CD-ROM
More Books
What Will You Need to Use This Book?
You will need experience with Visual C++ or one of the traditional PC programming languages for Windows, such as Microsoft or Borland C or C++, Turbo Pascal for Windows, or the Windows version of SmallTalk.
You will need Microsoft Visual C++, running on Microsoft Windows 95 or Microsoft Windows NT with appropriate hardware and sufficient resources. Prior experience in basic database design would be helpful.
What Will You Gain from This Book?

- The latest professional techniques in front-end and database design
- An arsenal of real-world solutions to speed your development
- Troubleshooting methods to put a stalled project back on track
- Detailed explanations of the theories behind the practical solutions
Database Developer's Guide with Visual C++ 4, Second Edition
Copyright 1996 by Sams Publishing
Acknowledgments
About the Authors
Introduction
What's New in Visual C++ for Database Developers
  - New Features of Visual C++
  - A Database Developer's View of Visual C++
  - Visual C++ and Microsoft BackOffice
Who Should Read This Book
What You Need to Use This Book Effectively
How This Book Is Organized
  - Part I
  - Part II
  - Part III
  - Part IV
  - Part V
  - Part VI
  - Appendixes
Conventions Used in This Book
  - Key Combinations and Menu Options
  - Visual C++ Code, SQL Statements, and Source Code in Other Languages
  - Entries in Initialization and Registration Database Files
  - Visual C++ Code Examples and Code Fragments
  - Prefix Tags for Data or Object Type Identification
A Visual C++ and Database Bibliography
  - Introductions to Visual C++ Programming
  - Visual C++ Books for Developers
  - A Book on the Microsoft Jet Database Engine
  - The Primary Guide to SQL-92
  - Publishers of Database Standards
Keeping Up to Date on Visual C++
  - Periodicals
  - The MSDN Support Product
  - Microsoft Internet Services
- Expanded coverage of ANSI and Access SQL queries via Visual C++ 4 code
- Additional techniques for OLE Automation
- Information on programming with the new OLE containers
- Coverage of OLE Custom Controls, which let you add new features to Visual C++ applications with minimal programming effort
- Information on OLE Custom Controls for 32-bit environments
- Techniques for using the new data access objects that Microsoft added to Visual C++ to position the product as a direct competitor to Visual Basic. These new techniques make Visual C++'s support of Access, FoxPro, and Paradox for Windows in the desktop database market even more complete.
- Detailed information on the redistributable 32-bit Microsoft Jet 3.0 database engine, which offers substantially improved performance compared to the 16-bit Jet engine
- Coverage of Visual C++'s built-in MFC classes, along with AppWizard, which let you quickly create a form to display database information with little or no Visual C++ code. Included is a sample program that has no programmer-written code at all.
- Information on the Microsoft ODBC Administrator application, included with Visual C++, which lets you connect to the Microsoft and Sybase versions of SQL Server and to Oracle client-server relational database management systems
- Extensive coverage of building Visual C++ 4 front ends to interface with client-server RDBMSs
- Clear examples that illustrate significant development topics, such as crosstab queries, action queries, transaction processing, and record locking
- Programming examples and techniques for using Sockets and MAPI services
201 West 103rd Street, Indianapolis, Indiana 46290

This book is dedicated to Kathareeya "Katie" Tonyai, a brand new granddaughter.
Peter Hipson

This book is dedicated to the memory of my father, George H. Jennings, Structural Engineer.
Roger Jennings
Composed in AGaramond, Optima, Helvetica, and MCPdigital by Macmillan Computer Publishing.
Printed in the United States of America.
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Sams Publishing cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
Acknowledgments
The authors are indebted to Neil Black of Microsoft's Jet Program Management Group and Stephen Hecht of Microsoft's Jet Development Group for their "Jet Database Engine 2.0 ODBC Connectivity" white paper. This white paper made a substantial contribution to the writing of Chapter 7, "Using the Open Database Connectivity API." You can find this white paper and a number of other works by these authors on the Microsoft Development Library CD. Search for the authors' names. Special thanks to Microsoft and Steve Serdy at Microsoft Developer Support for their excellent assistance. Steve's help with the Win32 Common Controls was most valuable. Thanks are also due to Robert Bogue and Jeff Perkins, the technical editors for this book. Jeff's help in getting our references to Microsoft SQL Server 6 up-to-date was most valuable. Thanks also to Grace Buechlein, our acquisitions editor. Grace's professionalism was most valuable when we had problems, and we hope to work with her again. Thanks to Michael Watson, our development editor, for his extra work in making sure that the book presented the very latest information available. Thanks also to Gayle Johnson, our production editor, and to Anne Owen, our copy editor.
About the Authors

Peter Hipson has worked with microcomputers since the mid-1970s. He also has many years of experience with IBM mainframes. He is the author of STARmanager, a GIS-type application that assists sales and marketing managers in managing their resources. You may contact Peter via CompuServe (70444,52) or the Internet ([email protected]).

Roger Jennings is a consultant specializing in Windows database, multimedia, and video applications. He was a member of the Microsoft beta-test team for Visual C++ 2.0; Visual Basic 2.0, 3.0, and 4.0; the Professional Extensions for Visual Basic 1.0; Visual Basic for DOS; Microsoft Access 1.0, 1.1, 2.0, and 7; Word for Windows 2.0; the 32-bit versions of Word and Excel; Microsoft Project 4.0 and 4.1; Windows 3.1 and Windows 95; Windows for Workgroups 3.1 and 3.11; Windows NT 3.5, 3.51, and 4.0 Workgroup and Server; the Microsoft ODBC 2.0 and 2.5 drivers; Video for Windows 1.1; and Multimedia Viewer 2.0. Roger is the author of Sams Publishing's Database Developer's Guide with Visual Basic 4, upon which much of this book is based. He also wrote Access 2 Developer's Guide and Access 95 Developer's Guide, both from Sams Publishing, and two other books on Microsoft Access, as well as books devoted to Windows 95, Windows NT, and desktop video production with Windows 95 and Windows NT 3.5x. He also is a contributing editor for Fawcette Technical Publications' Visual Basic Programmer's Journal. Roger has more than 25 years of computer-related experience, beginning with his work on the Wang 700 desktop calculator/computer. He has presented technical papers on computer hardware and software to the Academy of Sciences of the former USSR, the Society of Automotive Engineers, the American Chemical Society, and a wide range of other scientific and technical organizations. He is a principal of OakLeaf Systems, a Northern California software consulting firm. You may contact him via CompuServe (70233,2161), the Internet ([email protected]), or the Microsoft Network (Roger_Jennings).
Introduction
The release of Visual C++ clearly shows that Microsoft is taking the lead in creating C++ development platforms. Visual C++ 4 continues to build on the development platform that Microsoft's C/C++ product line has established. MFC, in versions 2.0, 2.5, 3.0, and 4, offers the C++ programmer an advanced object-oriented framework for applications that are easy to develop and maintain. Microsoft's OLE and the Component Object Model (COM) are now firmly entrenched as the new compound document standard for Windows, and OLE Automation replaces DDE as the primary means of interapplication communication.

Huge system resource consumption by OLE megaservers, typified by Excel and Word, limited the adoption of OLE Automation in commercial database front ends. Windows 95 and Windows NT 3.51 have overcome most resource limitations when running 32-bit OLE applications. Thus, 32-bit Visual C++ programs are likely to be the glue that binds industrial-strength solutions orchestrating members of 32-bit Microsoft Office with OLE Automation. Out-of-process OLE Automation servers, especially the big ones, are not renowned for their speed. Fortunately, there's a trend toward a Pentium on every power user's desktop, so faster hardware comes to the rescue again. OLE Custom Controls (OLE controls), which are in-process OLE Automation servers, don't suffer from the performance hit associated with the Lightweight Remote Procedure Calls (LRPCs) required by out-of-process servers. Thus, OLE controls typically are as quick as the VBXs that they replace in the 32-bit versions of Windows.
Microsoft's addition of the Microsoft Jet database engine's DAO interface to MFC makes Visual C++ a strong competitor to the principal players in the desktop database market: Access, Visual Basic, FoxPro for Windows, Lotus Approach, and Borland International's dBASE and Paradox for Windows. The Open Database Connectivity (ODBC) application programming interface (API), introduced in Visual C++, made Visual C++ a major factor in client-server front-end development throughout the world. DAO promises to lead Visual C++ 4 into areas that in the past have been the private domain of Visual Basic. Using 32-bit ODBC resolves the controversy in the computer press regarding the relative performance of DBLib and ODBC with Microsoft SQL Server. Tests show that 32-bit ODBC is as fast as or faster than equivalent calls to DBLib functions. The three major sections of this Introduction describe the new database connectivity features of Visual C++ and show how Visual C++ fits into the database front-end and back-end market.
The new features fall into two categories:

- Features that are of interest only to database developers
- Features that affect all Visual C++ developers, regardless of whether they use Visual C++'s Data Access Objects
First, it is very important to realize that programmers today use many different versions of Visual C++.
- The first version of Visual C++, 1.0, was available in both a 16-bit and a 32-bit edition. Both of these products are now out-of-date and aren't used much by professional programmers.
- The next major release of Visual C++, 1.5, was made available only in a 16-bit version. This version included MFC 2.5 and other improvements.
- The 16-bit versions of Visual C++ that followed Visual C++ 1.5 include 1.51 through 1.52. Version 1.52c is distributed with Visual C++ 4. Microsoft distributes both versions together to give developers of 32-bit applications access to the 16-bit development tools. The 1.5x versions of the 16-bit Visual C++ products incorporate minor changes and additions, such as the inclusion of the OLE controls development platform.
- The second major release of the 32-bit version of Visual C++ was Visual C++ 2. This product, released in the fall of 1994, supports only 32-bit applications. A redesigned development environment, MFC 3, and other features made Visual C++ 2 a major player in the 32-bit C/C++ development arena.
- The next release of the 32-bit version of Visual C++ was Visual C++ 4. This product, released in the fall of 1995, supports only 32-bit applications. MFC 4, DAO, numerous improvements to the developer environment, and other features make Visual C++ 4 a must-have for any serious Windows developer.
- A subscription program was offered to Visual C++ programmers who needed regular updates to Visual C++. Originally slated to be delivered every quarter, releases have been coming out about twice a year. The current subscription program promises the "next two releases," which would represent about a year in time. Microsoft has announced that Visual C++ 4.1 should be available in the spring of 1996. The subscription version of Visual C++ is available from Microsoft directly and from most resellers.
There will be more releases of Visual C++ as the years go by. However, it is beyond the scope of this book to speculate what the future holds for Visual C++. The following sections describe the categories of Visual C++ features.
The most apparent change in Visual C++ 4 for database programmers is the adoption of MFC 4 and the addition of the MFC Data Access Object (DAO) classes. Additionally, Windows 95's Rich Text common control has been integrated into MFC. The following list briefly describes the most important differences between Visual C++ 4 and earlier versions of Visual C++:
- Visual C++ 1.5x develops 16-bit applications, whereas Visual C++ 4 is a 32-bit (only) development platform.
- Visual C++ 4 uses a much more advanced development environment. The integration of Visual C++ 4's components is much tighter than it was with Visual C++ 2, and there have been major improvements to Visual C++ 4's ClassWizard. The new environment lets the programmer have docked windows, more flexible toolbars, and other ease-of-use features.
- Visual C++ 4 offers MFC 4 and supports 32-bit OLE.
- Visual C++ 4 has integrated the functionality of the AppStudio program (which is used to edit program resources) into Visual C++'s development environment. There are improved toolbar editing and design tools, and the graphics editor is much better. For programmers who are just moving up to the 32-bit version of Visual C++, there is no need to switch between Visual C++ and AppStudio while developing applications.
- Visual C++ 4 can edit resources in applications. You can use this functionality, which was missing from Visual C++ 2, when running Visual C++ 4 under Windows NT.
- Visual C++ applications support toolbars, status bars, dialog bars, and floating palettes. Toolbars are dockable and have optional tooltip support built in.
- Visual C++ supports 32-bit OLE Custom Controls, more commonly called OLE controls in this book. Because VBX controls can't be used in 32-bit environments, OLE controls are your only custom control choice when creating 32-bit Visual C++ 4 applications. Third-party VBX publishers have created a wide array of OLE controls, with functionality similar to their most popular VBX controls.
- Visual C++ 4 comes with a number of useful OLE controls, including the ever-popular Grid control, and a number of sample controls that can be rewritten if you like.
- Customization lets you alter Visual C++'s user interface by adding menu choices, modifying toolbars, and manipulating other elements of the design environment. Visual C++ 4 lets you configure the editor to emulate either the Brief or Epsilon editors.
- Visual C++ 4 supports remote debugging using either a network connection or serial ports. However, there is still no support for dual-monitor debugging.
- You can create out-of-process OLE miniservers (OLE applets, typified by Microsoft Graph 5.0) that you can use with any application that can act as an OLE Automation client, including Visual C++. Microsoft calls these miniservers LOBjects (line-of-business objects). You can run multiple instances of Visual C++ in order to test your newly created OLE applet.
PAL of Paradox. xBase and PAL, however, all have their roots in the original Dartmouth BASIC. The C programming language has a structure similar to PL/I or Pascal. Thus, you might find the structure of Visual C++ applications similar to the xBase or PAL programs you're now writing. If only C++ were as simple as C! C is, for the most part, easy to use, but when many programmers are exposed to C++ for the first time, they find it to be totally different from C. Nothing could be further from the truth, however. C++ is easy to learn and adapt to if you remember that the original C language is a subset of C++; the quickest conversion from C to C++ is to simply rename the file.

Simply choosing Visual FoxPro, dBASE for Windows, or Paradox for Windows because you're accustomed to writing xBase or PAL code isn't likely to be a viable long-term solution to your Windows database development platform dilemma (notwithstanding John Maynard Keynes' observation that "in the long term, we are all dead"). If you create Windows applications for a living, either as an in-house or independent developer, you're expected to provide your firm or client with applications that incorporate today's new technologies. You need to prepare now for OLE, with its in-place activation and OLE Automation (OA), and Visual C++ OLE controls. Windows is where the action is. More than 60 million copies of Windows (a mixture of Windows 3.x, Windows 95, and Windows NT) give Microsoft the marketing clout to make OLE, ODBC, and OLE controls the "standards" of the desktop computers of the world (whether the "industry" agrees or not). The alternative vaporware standards proposed by groups of software vendors organized to combat the Microsoft behemoth are very unlikely to replace OLE, ODBC, and OLE controls in the foreseeable future.

Windows desktop database applications present a challenge to developers accustomed to writing a thousand or more lines of code to create character-based RDBMS applications in Visual Basic, xBase, PAL, C, or other programming languages. You can create a very simple but usable Visual C++ database application with AppWizard and very little Visual C++ code. One example in this book contains no programmer-written code at all. Microsoft Access offers code-free capabilities, too, if you don't consider Access macros to be code. You'll need to write substantial amounts of code to create a usable database application with dBASE, Visual FoxPro's xBase, or Paradox's ObjectPAL. The reality, however, is that you have to write a substantial amount of code to create a commercial-quality production database application with any of these products. The issue isn't how much code you have to write, but in which language you will write the code. Here are some of the language issues that will affect your career opportunities or the size of the numbers on your 1099s:
- xBase and PAL are yesterday's most prevalent desktop database programming languages. Will xBase and ObjectPAL ultimately survive the onslaught of Microsoft's present Object Basic dialects, such as Visual Basic and the Visual Basic for Applications (VBA) variants of Microsoft Word 7, Access 7, and Excel 7?
- Visual C++ lets you write incrementally compiled database applications that don't require helper libraries such as VBRUN300.DLL. Borland's Delphi offers fully compiled .EXEs but uses a variation of Pascal as its programming language. Does Delphi's Pascal really stand a chance of replacing C++ as the preferred language for writing commercial Windows applications or of making Object Basic/VBA extinct as an application programming (macro) language?
- Visual C++ 4, Access 7, and Visual Basic 4 act as containers for OLE Custom Controls. Excel, Word, and Project also support OLE controls. Will many of the new application systems that will be released in the next few years also support OLE controls?
- Visual C++ 4 allows cross-platform development for a number of different platforms, including applications for Macintosh computers. Microsoft now has a Macintosh version of Visual C++ that enables Windows developers to easily port their Windows applications to the Macintosh. Most development done under Windows NT can be directly ported to Windows 95 with minimal modifications.
This book doesn't purport to answer these questions directly, but you're likely to reach your own conclusions before you finish reading it. At the time this book was written, Visual C++, Visual Basic 4, and Access were the major database development platforms to fully support OLE, OLE Automation, and OLE controls. Visual C++ is the only development platform, other than Visual Basic, that lets you create your own OLE Automation miniservers. Chapter 16, "Creating OLE Controls with Visual C++ 4," and Chapter 17, "Using OLE Controls and Automation with Visual C++ Applications," describe how OLE, OLE Automation, and OLE controls fit into your decision-support database front ends.
NOTE Access 2 was released before the OLE Custom Control specification was finalized and many months before the retail release of Microsoft Visual C++ 2.0's Control Development Kit (CDK), which developers need in order to implement OLE controls. Access 2's OC1016.DLL isn't compatible with the final version of commercial OLE controls designed for use with Visual C++, which use OC25.DLL. The Access 2 Service Pack updates Access 2.0 to accommodate 16-bit OLE controls based on Visual C++ 1.5's OC25.DLL. If you find that you need to work with Access 2, you should keep these restrictions in mind. Access 7 doesn't present these problems.
Whatever language you ultimately choose, you must adapt to the event-driven approach to application design, inherited from the Windows graphical user interface (GUI). You also need to face the fact that Windows applications won't perform with the blazing speed of your Clipper or FoxPro applications running directly under DOS. Few, if any, Windows applications can match their DOS counterparts in a speed contest, but this situation is likely to change when you run your 32-bit database front end under Windows NT on a high-powered RISC workstation. Fortunately, most Windows users have grown accustomed to the sometimes sluggish response of Windows. It's possible, however, to design Visual C++ client-server front ends that rival the performance of their character-based counterparts. Chapter 15, "Designing Online Transaction-Processing Applications," and Chapter 20, "Creating Front Ends for Client-Server Databases," provide examples of "plain vanilla" front ends that deliver excellent performance.
When this book was written, Microsoft Office (both 4.2x and Office 95) had garnered more than 80 percent of the Windows productivity application suite (front-end) market. Microsoft Office 4.2 includes Excel 5.0, Word 6.0, PowerPoint 4.0, and a Microsoft Mail 3.2 client license. Microsoft Office 95 includes Excel 7, Word 7, and PowerPoint 7, all in 32-bit versions that run under both Windows 95 and Windows NT. The Professional Versions of Microsoft Office also include Microsoft Access (Office 4.3 contains Access 2.0, while Office 95 includes Access 7). Encouraged by the success of Office, Microsoft introduced its server (back-end) suite, BackOffice, in the fall of 1994. Microsoft BackOffice comprises a bundle of the following server products:
- Microsoft Windows NT 3.51 Server, the operating system on which the other components of BackOffice run as processes.
- Microsoft SQL Server 6, a client-server RDBMS, which will be upgraded to Microsoft SQL Server 6.5 in the spring of 1996.
- Microsoft SNA Server 2.11, which provides connectivity to IBM mainframes and AS/400 series minicomputers via IBM's System Network Architecture.
- Microsoft Systems Management Server (SMS) 1.1, which helps you distribute software and track client hardware and software. Version 1.2 is scheduled to be released to beta sites sometime in 1996.
- Microsoft Exchange Server, which integrates e-mail, group scheduling, electronic forms, and groupware applications on a single platform that can be managed with a centralized, easy-to-use administration program. It's designed to make messaging easier, more reliable, and more scalable for businesses of all sizes.
- Microsoft Mail Server 3.5, a file-sharing e-mail system, which will be upgraded to Microsoft Exchange Server when Exchange becomes available in the spring of 1996.
As with Microsoft Office, you get a substantial discount (about 40 percent) off the individual server license prices when you purchase the BackOffice bundle. Unlike earlier versions of Windows NT Server and SQL Server, which were available in "Enterprise" versions with unlimited client licenses, BackOffice doesn't include client licenses. The commercial success of BackOffice is by no means assured. It's not likely that large numbers of major corporations, the target market for BackOffice, will adopt this bundle until final versions of all of its promised components are delivered, which might not happen until 1997.

Microsoft SQL Server 6.0 and Exchange Server use OLE and OLE Automation pervasively. The Messaging API 1.0 (Extended MAPI), on which Exchange is based, uses Messaging OLE objects, and Schedule+ 7.0 has its OLE/Schedule+ object collections. The development tools for the Microsoft SQL Server 6.0 RDBMS and the Exchange e-mail system use VBA. (Exchange Server is a nonrelational database optimized for messaging services.) Ultimately, all of the members of the BackOffice suite are likely to offer VBA extensions for customization. Microsoft has positioned Visual C++ as a development platform for "building solutions" based on BackOffice servers. Visual C++ developers stand to gain a huge new revenue base writing database front ends for BackOffice servers.
Who Should Read This Book

This book is written for the following readers:

- Visual C++ developers who want to take maximum advantage of Visual C++'s database connectivity to create high-speed, production-grade graphic front ends for a variety of desktop and client-server databases.
- Access developers who have found that they need more control over their data display and editing forms than is afforded by the present version of Microsoft Access. Visual C++ database applications also consume far fewer Windows resources than equivalent Access applications.
- Visual Basic developers who want to take advantage of Visual C++'s automated access to the Windows APIs, gain function callback capability, and manipulate pointers.
- Developers of character-based DOS database applications whose clients or organizational superiors have decided to migrate from DOS to Windows applications.
- Users of xBase or Paradox products who need to create industrial-strength, 32-bit database front ends running under Windows 95 or Windows NT 3.51. (Windows NT 3.51 might not yet be a major player in the operating systems numbers game, but firms that adopt Windows NT 3.51 are major employers of database consultants.)
- Programmers who would like to develop database applications by expending less than 25 percent of the time and effort required to create equivalent applications with C and C++. Those addicted to C++ can quickly create prototype database applications with Visual C++. (It's amazing how many prototype Visual C++ applications become production database front ends.)
- Victims of the corporate downsizing revolution, principally COBOL, PL/I, or FORTRAN programmers who need to acquire C, C++, and Windows database development skills to remain gainfully employed.
- Users of proprietary GUI front-end development applications for client-server databases who are tired of forking over substantial per-seat licensing fees for each client workstation that is attached to the server.
- Chief information officers (CIOs) or management information services (MIS) executives who need to make an informed decision as to which Windows front-end generator their organization will adopt as a standard.
- Others who are interested in seeing examples of commercially useful Visual C++ database applications that earn developers a comfortable or better-than-comfortable living.
What You Need to Use This Book Effectively

This book assumes that you have experience with Visual C++ or one of the traditional PC programming languages for Windows, such as Microsoft or Borland C or C++, Turbo Pascal for Windows, or the Windows version of SmallTalk. This book doesn't contain an introduction to C/C++ programming techniques; many excellent tutorial and reference books are available to fill this need. (The bibliography that appears later in this Introduction lists some of the better books and other sources of information for beginning-to-intermediate-level C/C++ programmers.) Instead, this book begins with an overview of how Visual C++ fits into the desktop and client-server database market and proceeds directly to dealing with data sources in Visual C++.

The entire content of this book is devoted to creating useful Visual C++ database applications, and most of the examples of Visual C++ code involve one or more connections to databases. All the code examples in this book, except for minor code fragments, are included on the accompanying CD. Sample databases in each of the formats supported by the Access 7 database engine are provided. Some of the sample databases are quite large, so you can use their tables for performance comparisons. Tips and notes based on the experience of database developers with Visual C++ and Access appear with regularity.
NOTE If you don't have Excel or Word for Windows, it might be well worth the investment to get the Microsoft Office Professional Edition, which includes Word, Excel, PowerPoint, and Access.
Developers of commercial database applications with Visual C++ are likely to want the additional features offered by third-party, data-aware custom controls. As these controls become available (WinWidgets/32 is an example), they can save the programmer substantial effort in developing applications. Although Microsoft has co-opted the data-aware grid, combo box, and list box OLE control market by providing 16- and 32-bit OLE control versions of these controls with Visual C++ 4, many third-party publishers offer quite useful enhancements to Microsoft's set. Several third-party custom controls are used to create the sample applications in this book. Sources of these OLE controls are provided in Appendix A, "Resources for Developing Visual C++ Database Applications."

Most Visual C++ database applications for Windows 3.1+ and Windows 95 will perform satisfactorily on 80386DX/33 or faster computers with 8M or more of RAM. If you plan to use Access, you should have a minimum of 16M of RAM. If you plan to take full advantage of OLE and OLE Automation, 12M to 16M of RAM is recommended, regardless of which applications you will be running. All of the 32-bit versions of the sample applications in this book run satisfactorily under Windows NT Workstation 3.51 with 16M of RAM, and on a Windows 95 80386DX/33 with 8M of RAM. All development work was done on a Pentium 90 with 32M of RAM, a machine of acceptable performance. The authors recommend running Visual C++ 4 with at least 16M of RAM on a fast 486 (or better) processor.
How This Book Is Organized

Part I
Part I, "Visual C++ Data Access," introduces you to Visual C++'s capabilities as a Windows database application development environment. Chapter 1, "Positioning Visual C++ in the Desktop Database Market," analyzes the features that Visual C++ offers database developers and shows how the language fits into Microsoft's strategy to dominate the desktop and client-server database development markets. Chapter 2, "Understanding MFC's ODBC Database Classes," provides a detailed description of how you create and manipulate Visual C++ MFC classes and collections using Access .MDB databases. Chapter 3, "Using Visual C++ Data Access Functions," covers the ODBC API-level SQL...() functions and presents some functions that use the ODBC API and that can be called from both C and C++ programs.
Part II
Part II, "Database and Query Design Concepts," deals with relational database design and shows you how to use SQL to create SELECT and action queries that employ the Access database engine and the ODBC API to process the queries. Chapter 4, "Optimizing the Design of Relational Databases," shows you how to normalize data in order to eliminate data redundancy in your application. Chapter 5, "Learning Structured Query Language," discusses ANSI SQL-89 and SQL-92 and tells how Access SQL differs from the "standard" SQL used by client-server and mainframe databases. Chapter 6, "The Microsoft Jet Database Engine," provides insight on the use of the Jet ODBC drivers with xBase, Paradox, and Btrieve tables. It also introduces you to Microsoft's Data Access Objects (DAO), a technology newly incorporated into Visual C++ 4.0. Chapter 7, "Using the Open Database Connectivity API," shows how Visual C++ applications interface with ODBC drivers. Chapter 8, "Running Crosstab and Action Queries," advances beyond simple SQL SELECT queries and shows you how to write queries that include TRANSFORM, PIVOT, INTO, and other less commonly used SQL reserved words that modify the data in your tables.
Part III
Part III, "An Introduction to Database Front-End Design," is devoted to creating commercial-quality decisionsupport front ends for databases. Chapter 9, "Designing a Decision-Support Application," describes the principles of converting raw data into easily comprehensible information that can be displayed on Visual C++ forms. Chapter 10, "Creating Your Own Data Access Controls," shows you how to take advantage of OLE Custom Controls. Chapter 11, "Using the New Win32 Common Controls," gives examples of using Visual C++ with the new Win32 Common Controls in programs that will work in both Windows 95 and Windows NT. Chapter 12, "Printing Reports with Report Generators," shows you how to design reports and how to seamlessly integrate report generation with your database applications.
Part IV
Part IV, "Advanced Programming with Visual C++," takes you deeper into the realm of commercial database application development. Chapter 13, "Understanding MFC's DAO Classes," shows you the new MFC 4 implementation of the Data Access Object interface to the Microsoft Jet database engine, providing a reference to the DAO classes. Chapter 14, "Using MFC's DAO Classes," presents a practical tutorial dealing with the DAO classes. Chapter 15, "Designing Online Transaction-Processing Applications," describes how to design forms for heads-down, high-speed data entry and how to use Visual C++'s transaction-processing reserved words to speed bulk updates to tables. Chapter 16, "Creating OLE Controls with Visual C++ 4," explains how to develop practical OLE controls. Chapter 17, "Using OLE Controls and Automation with Visual C++ Applications," describes how to add OLE controls to your applications using Visual C++ 4. Part IV concludes
file:///H|/0-672-30913-0/vcgfm.htm (15 of 23) [14/02/2003 03:05:27 ]
vcgfm.htm
with Chapter 18, "Translating Visual Basic for Applications Code to Visual C++," for Access developers who are porting Access applications to Visual C++.
Part V
Up until Part V, "Multiuser Database Applications," this book is devoted to self-contained applications designed for a single user. Part V provides the background and examples you need to add networking and client-server database capabilities to your Visual C++ database applications. Examples employ Windows 95, Windows NT Server 3.51, and SQL Server 6 for Windows NT. Chapter 19, "Running Visual C++ Database Applications on a Network," describes how to use peer-to-peer and network servers to share databases among members of a workgroup or throughout an entire organization. Chapter 20, "Creating Front Ends for Client-Server Databases," describes how to use the ODBC API to set up and connect to client-server and mainframe data sources with your Visual C++ applications. Decision-support and online transaction-processing examples that connect to Microsoft SQL Server for Windows NT are included. Chapter 21, "Interacting with Microsoft Mail, MAPI, and TAPI," details the use of the MAPI custom control, the Schedule+ Access Library (SAL), and TAPI. Examples of using Microsoft's Electronic Forms Designer (EFD) to create mail-enabled Visual C++ applications are provided as well. Chapter 21 also gives you a brief glimpse of what you can expect when you begin to develop applications for Microsoft Exchange.
Part VI
Part VI, "Distributing Production Database Applications," shows you that no production database application is complete without full documentation and an online help system for users. Chapter 22, "Documenting Your Database Applications," shows you how to use Visual C++'s database object collections to create a data dictionary in the form of a text file that you can import into other applications, such as Word for Windows 7 or Excel 7. Chapter 23, "Creating Help Files for Database Applications," describes how to use Word for Windows and commercial WinHelp assistants, such as Doc-To-Help and RoboHelp, to speed the addition of contextsensitive help to your Visual C++ applications. Chapter 24, "Creating Distribution Disks for Visual C++ Applications," shows you how to create a professional installation application that uses either the Microsoft Setup application for Visual C++ or other mainstream Windows setup applications.
Appendixes
The appendixes provide useful reference data. Appendix A, "Resources for Developing Visual C++ Database Applications," lists add-in products that offer new features to your database applications. It also lists publishers of periodicals devoted to Visual C++ and databases in general. Suppliers and publishers are categorized by subject, and entries include addresses, telephone numbers, and fax numbers, as well as brief descriptions of the products listed. Appendix B, "Naming and Formatting Conventions for Visual C++ Objects and Variables," describes the prefix tags used in this book to identify the object or data type of variables. These naming conventions are based on a slightly modified form of Hungarian Notation that is commonly used in C and C++ programming. This notation was invented by Charles Simonyi, who at one time worked on the development of Access 1.0 at Microsoft. Appendix C, "Using the CD-ROM," describes the files included on the CD that comes with this book.
Conventions Used in This Book

Key Combinations and Menu Options

Accelerator key combinations (Alt-key) and shortcut key combinations (Ctrl-key) that you use to substitute for mouse operations are designated by joining the key with a hyphen (-). Ctrl-C, for example, is the shortcut key for copying a selection to the Windows clipboard. Alt-H is a common accelerator key combination that takes the place of clicking the Help button in dialog boxes. Some applications, such as Microsoft Word 6, use multiple-key shortcuts, such as Ctrl-Shift-key, to activate macros. Menu options are separated by a vertical bar. For example, "File | Open" means "Choose the File menu and select the Open option."
Visual C++ Code, SQL Statements, and Source Code in Other Languages
Visual C++ code, Visual C++ reserved words, keywords (such as the names of collections), SQL statements, and source code fragments in other programming languages appear in monospace type. Reserved words and keywords in ANSI SQL and xBase programming languages appear in uppercase monospace. Here is an example of the formatting of an SQL statement:
SELECT Name, Address, City, Zip_Code FROM Customers WHERE Zip_Code >= 90000
Here is an example of the formatting of xBase code:

USE customers
LIST name, address, city, zip_code WHERE zip_code >= 90000

The code line continuation character [ic:ccc] is used when the length of a code line exceeds the margins of this book. For example:
EXF=3.0;File;&Export Folder...;11;IMPEXP.DLL;0;;Exports folders to a [ic:ccc]backup file;MSMAIL.HLP;2860 Note that this separator character isn't a valid character when embedded in an SQL statement String variable, nor is it valid in C/C++ source code. Wherever possible, C/C++ code is broken so that the [ic:ccc] character isn't needed. Some listings created by Visual C++'s AppWizard contain lines that are too long to fit on one line of this book. These lines are printed on two lines. When you look at the code in an editor, however, you will see these lines as one long line. Special implementations of SQL that don't conform to ANSI SQL-92 standards, such as the {ts DateVariable} syntax that Microsoft Query uses to indicate the timestamp data type, appear as in the SQL dialog box of the application. The PIVOT and TRANSFORM statements of Access SQL that (unfortunately) weren't included in SQL-92, however, retain uppercase status.
Entries in Initialization and Registration Database Files

Entries in Windows, Visual C++, and Microsoft Access 2 initialization (.INI) files appear in monospace type. Sections of .INI files are identified by square brackets surrounding the section identifier, as in

[Options]
SystemDB=c:\vbapps\system.mda

Entries that you make in Windows 3.1+'s registration database using the registration database editor application, REGEDIT.EXE, in verbose mode also appear in monospace to preserve indentation that indicates the level of the entry in the tree (file-directory-like) structure of registration database entries. The full path to HKEY_CLASSES_ROOT and other entries in the Registry of Windows 95 and Windows NT is provided unless otherwise indicated in the accompanying text.
Visual C++ Code Examples and Code Fragments

As mentioned earlier, all examples of Visual C++ code appear in monospace. Monospace type also is used for code fragments. Styles and weights are applied to code examples and fragments according to the following rules:
- Names of symbolic constants, including the three constants TRUE, FALSE, and NULL, appear in uppercase.
- Additions to Visual C++ AppWizard-produced programs typically appear in bold monospace. This lets you see what has been added to create the application's functionality.
- Replaceable variable names, arguments, and parameters, also known as placeholders, appear in italic monospace. Data type identification tag prefixes that identify the data type of variables of Visual C++'s fundamental data types and the type of object for variables of object data types don't appear in italic, as in int nObjectVar.
- French braces ({}) indicate that you must select one of the optional elements separated by the pipe character (|) and enclosed in the braces. This doesn't apply to the unusual employment of French braces by Microsoft Query in SQL statements.
- An ellipsis (...) indicates that the remaining or intervening code required to complete a statement or code structure is assumed and doesn't appear in the example.
Prefix Tags for Data or Object Type Identification

The code examples in this book use two- or three-letter prefix tags to identify the data type of variables and symbolic constants of the fundamental data types of Visual C++ and other Object Basic dialects, as well as object variables. Examples of Hungarian variable names of the fundamental data types are szStringVar, nIntegerVar, lLongVar, dDouble, and pPointer. Microsoft and this book use the term fundamental data type to distinguish conventional variables, which have names that are Visual C++ reserved words, from variables of object data types, which might have names that are either reserved or keywords. Prefix tags also are used to identify the type of object when you declare variables of the various object data types supported by Visual C++. The most common object prefix tags in this book are wsWorkSpace, dbDatabase, and qdfQueryDef. Appendix B provides detailed information on the derivation and use of type identifier prefix tags.
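The following minimal sketch illustrates the prefix-tag convention in Visual C++ source code. It is an illustration only; the variable names and values are hypothetical, and the object declarations assume the MFC DAO classes (afxdao.h) simply because their class names match the ws, db, and qdf tags mentioned above.

// Illustration of the prefix-tag (Hungarian) naming convention.
// All names and values here are hypothetical examples.
#include <afxdao.h>   // MFC DAO classes: CDaoWorkspace, CDaoDatabase, CDaoQueryDef

void NamingConventionExample()
{
    // Fundamental data types: the tag identifies the data type.
    char   szCustomerName[40] = "OakLeaf Systems";  // sz = zero-terminated string
    int    nRecordCount = 0;                        // n  = int
    long   lRowsAffected = 0L;                      // l  = long
    double dUnitPrice = 19.95;                      // d  = double
    int*   pnCounter = &nRecordCount;               // p  = pointer (to an int)

    // Object data types: the tag identifies the class of the object.
    CDaoWorkspace wsCurrent;                        // ws  = Workspace object
    CDaoDatabase  dbSales;                          // db  = Database object
    CDaoQueryDef  qdfMonthlyTotals(&dbSales);       // qdf = QueryDef object
}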
A Visual C++ and Database Bibliography

Introductions to Visual C++ Programming

The following books are designed to introduce you to Visual C++'s event-driven graphical programming environment:

Essential Visual C++, by Mickey Williams (Indianapolis: Sams Publishing, 1995, ISBN: 0-672-30787-1)
Teach Yourself Visual C++ 4 in 21 Days, by Nathan and Ori Gurewich (Indianapolis: Sams Publishing, 1994, ISBN: 0-672-30795-2)
Visual C++ in 12 Easy Lessons, by Greg Perry and Ian Spencer (Indianapolis: Sams Publishing, 1995, ISBN: 0-672-30637-9)
What Every Visual C++ 2 Programmer Should Know, by Peter Hipson (Indianapolis: Sams Publishing, 1994, ISBN: 0-672-30493-7)
Visual C++ Books for Developers

The following books cover intermediate-to-advanced Visual C++ programming topics:

Develop a Professional Visual C++ Application in 21 Days, by Mickey Williams (Indianapolis: Sams Publishing, 1995, ISBN: 0-672-30593-3)
Master Visual C++, Third Edition, by Nathan and Ori Gurewich (Indianapolis: Sams Publishing, 1995, ISBN: 0-672-30790-1)
Visual C++ 2 Developer's Guide, Second Edition, by Nabajyoti Barkakati (Indianapolis: Sams Publishing, 1995, ISBN: 0-672-30663-8)
Visual C++ 4 Unleashed, by Viktor Toth (Indianapolis: Sams Publishing, 1996, ISBN: 0-672-30874-6)
A Book on the Microsoft Jet Database Engine

The following book offers an excellent reference to the Microsoft Jet database engine:

Microsoft Jet Database Engine Programmer's Guide, by Dan Haught and Jim Ferguson (Redmond: Microsoft Press, 1996, ISBN: 1-55615-877-7)
The Primary Guide to SQL-92

If you want to fully understand the history and implementation of the American National Standards Institute's X3.135.1-1992 standard for SQL-92, you need a copy of Jim Melton and Alan R. Simon's Understanding the New SQL: A Complete Guide (San Mateo: Morgan Kaufmann Publishers, 1993, ISBN: 1-55860-245-3). Jim Melton of Digital Equipment Corp. was the editor of the ANSI SQL-92 standard, which comprises more than 500 pages of fine print.
Publishers of Database Standards

The syntax of SQL is the subject of a standard published by the American National Standards Institute (ANSI). At the time this book was written, the current standard, X3.135.1-1992 or SQL-92, was available from

The American National Standards Institute
11 West 42nd Street
New York, NY 10036
(212) 642-4900 (Sales Department)

The SQL Access Group (SAG) consists of users and vendors of SQL database management systems. SAG publishes standards that supplement ANSI X3.135.1-1989, such as the Call-Level Interface (CLI) standard used by Microsoft's ODBC API. You can obtain SAG documents from

The SQL Access Group
1010 El Camino Real, Suite 380
Menlo Park, CA 94025
(415) 323-7992 (extension 221)
Keeping Up to Date on Visual C++

Periodicals
The following are a few magazines and newsletters that cover Visual C++ or Access, or in which articles on either appear on a regular basis:
- Data Based Advisor is published by Data Based Solutions, Inc., a firm related to the publishers of Access Advisor. Data Based Advisor covers the gamut of desktop databases, with emphasis on xBase products, but Visual C++ receives its share of coverage, too.
- DBMS magazine, published by M&T, a Miller-Freeman company, is devoted to database technology as a whole, but it concentrates on the growing field of client-server RDBMSs. DBMS covers subjects, such as SQL and relational database design, that are of interest to all developers, not just those who use Visual C++.
- Smart Access is a monthly newsletter from Pinnacle Publishing, Inc., which publishes other database-related newsletters and monographs. Smart Access is directed primarily at developers and Access power users. This newsletter tends toward advanced topics, such as creating libraries and using the Windows API with Access and Visual C++. A diskette is included with each issue.
- Windows Watcher, Jesse Berst's (now Ziff-Davis') monthly newsletter, analyzes the market for Windows applications, reviews new products for Windows, and provides valuable insight into Microsoft's future plans for Windows 95, Windows NT, and Windows applications, including Visual C++.
The majority of these magazines are available from newsstands and bookstores. Names and addresses of the publishers are listed in Appendix A.
The MSDN Support Product

Microsoft sells three sets of subscription services available on CD-ROM called MSDN (Microsoft Developer Network) Level I, MSDN Level II, and MSDN Level III. Level I support consists of quarterly disks that contain a vast amount of information and documentation. These CDs are very valuable to all programmers. Level II support consists of Level I and all the development tools (excluding compilers), such as SDKs and platforms (including versions of Windows, Windows 95, and Windows NT), that Microsoft offers, excluding the BackOffice suite. These CDs (the count varies from quarter to quarter, but usually there are about 20 CDs per release) provide a firm foundation for all professional developers at a very affordable price. Level III support includes Level II plus the Microsoft BackOffice development system.
Microsoft Internet Services

Microsoft and other firms sponsor services on the Internet. Some of the best support can be obtained, without charge, by using the Microsoft Knowledge Base product, which can be accessed from the Internet. This allows you to query the database that the Program Support Services (PSS) people use and obtain answers to technical questions.
- 1 - Positioning Visual C++ in the Desktop Database Market

Choosing Visual C++ as Your Database Development Platform
Using Visual C++ as a Database Front End
  - Database Front-End Generators
  - Visual C++ and SQL
  - Classifying Database Front-End Applications
  - Database Front Ends for Decision Support
  - Transaction-Processing Applications
Categorizing Database Management Systems for Visual C++
  - Traditional Desktop RDBMSs
  - Client-Server RDBMSs
  - Access: A Nontraditional Desktop RDBMS
  - Mainframe and Minicomputer Database Management Systems
Abandoning Traditional Database Programming Languages
  - Adapting to the Windows Event-Method Environment
  - Dealing with Programming Objects
  - The Data Types of Visual C++
The Data Access Objects of Visual C++
  - Object Collections
  - The Data Control Object
OLE and Visual C++
  - OLE Automation
  - Visual Basic for Applications
Summary
Paradox for Windows are categorized as desktop databases. Since its introduction, Visual Basic has had better support for database interfaces than Visual C++; only with Visual C++ 4 has the C/C++ programmer had a real interface with the Microsoft Jet database engine. Desktop databases are applications that employ a proprietary (native) database file or index structure (or both) and that can be run on a PC of moderate performance (for example, an 80486 with 8M of RAM). Desktop databases also include an application programming language that is designed specifically for use with the native database structure.

When the first edition of this book was written, Microsoft had sold more than four million copies of Access versions 1.0, 1.1, and 2.0. Between mid-June and mid-November of 1995, Microsoft released Windows 95 and 32-bit "designed for Windows 95" versions of Access (7.0), Visual FoxPro (3.0), Visual Basic (4.0), and Visual C++ (4.0), together with the 32-bit Microsoft Office 95. Microsoft wanted to make sure that early adopters of Windows 95 would have 32-bit applications to run.
NOTE Access 7 for Windows 95 started shipping near the end of 1995, much later than Word 7 and Office 95.
Visual C++ is Microsoft's most extensive and powerful programming language. Microsoft's original objective for Visual C++ was to provide a powerful and flexible platform that programmers could use to create their own Windows applications while running under Windows. Microsoft achieved this goal with Visual C++ 1.0. Many experienced programmers abandoned DOS-based C, C++, and Pascal in favor of Visual C++ because they could develop Windows applications faster than with traditional programming languages while working with Windows' graphical interface. Microsoft enriched Visual C++ 1.5 with improvements in the interface and extensions to the MFC C++ libraries, while Visual C++ 2.x moved programmers into the 32-bit application world. Visual C++ 4.0 moves the programmer interface, class library, and feature set to a new high.

With the introduction of Visual C++ 4.0, a new set of database features has been added. Visual C++ 4.0 supports DAO (Data Access Objects) in addition to ODBC and also greatly extends other support, such as the addition of container support for OLE Custom Controls. Independent firms have created a variety of utilities, libraries, and add-on features for Visual C++, the majority of which address database applications. There will be, in the very near future, a plethora of new OLE Custom Controls for database programmers.

By early 1993, a Microsoft market study showed that more than 70 percent of Windows applications involved databases in one form or another. In October of 1995, Microsoft's Visual C++ product manager noted in a speech in Boston that between 40 and 60 percent of all Visual C++ applications were database oriented. Visual C++ can be expected to be a popular database application development tool as well. Even before the introduction of Visual C++ 4.0, with its data access objects (CRecordset, CDatabase, and CRecordView) that greatly enhance database functionality, C and C++ were major but unrecognized players in the Windows database market. The introduction of Visual C++ 4.0 has pushed Visual C++ to be a strong competitor to Visual Basic in the database development platform arena. The failure of market research firms to place Visual C++ in the Windows database category caused significant distortion of the desktop database market statistics for 1993 and later.

This chapter describes Visual C++'s role in database application development and shows how Visual C++, OLE (Object Linking and Embedding) automation, ODBC (Open Database Connectivity), DAO, and MFC fit into Microsoft's strategy to maintain its domination of the Windows application marketplace. This chapter also discusses the advantages and drawbacks of using Visual C++ for database applications and gives you a preview of many of the subjects that are covered in detail in the remaining chapters of this book. It's becoming a 32-bit, "Designed for Windows 95" world out there, so this book concentrates on 32-bit application development with Visual C++ 4.0.
- The 32-bit Microsoft Jet 3.0 database engine offers substantially improved performance compared to 16-bit Jet 2+. Jet 3.0 is multithreaded, with a minimum of three threads of execution. (You can increase the number of available threads by an entry in the Windows 95 or Windows NT Registry.) Overall optimization and code tuning also contribute to faster execution of Jet 3.0 queries.
- Visual C++'s built-in MFC classes, along with AppWizard, let you quickly create a form to display database information with little or no Visual C++ code; a minimal sketch of an MFC recordset class follows this list. Chapter 2, "Understanding MFC's ODBC Database Classes," contains a sample program that has no programmer-written code at all.
- Visual C++ is flexible because it doesn't lock you into a particular application structure, as is the case with Access's multiple document interface (MDI). Nor do you have to use DoCmd instructions to manipulate the currently open database.
- Visual C++ 4.0 database front ends require substantially fewer resources than their Access counterparts. Most 32-bit Visual C++ 4.0 database applications run fine under Windows 95 on PCs with 8M of RAM and under Windows NT 3.51+ in the 16M range. Microsoft says Access 95 requires 12M of RAM under Windows 95, but you need 16M to achieve adequate performance for all but trivial Access 95 applications. A typical Visual C++ database front-end program would probably run with satisfactory performance on a system with as little as 12M of RAM under Windows 95, and would run well with the same amount of RAM under future versions of Windows NT.
- OLE Custom Controls, not yet available in all other database development platforms, let you add new features to Visual C++ applications with minimal programming effort. Third-party developers can create custom control add-ins to expand Visual C++'s repertoire of data access controls. Custom controls can take the form of OLE Custom Controls for the 32-bit environments.
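To make the MFC recordset point concrete, here is a minimal, hedged sketch of the kind of CRecordset-derived class that AppWizard generates for an ODBC data source. The data source name (SalesDSN), table name (Customers), and column names used here are hypothetical placeholders, not objects from the book's sample applications.

// Minimal sketch of an AppWizard-style CRecordset class bound to an ODBC
// data source. The DSN, table, and column names are hypothetical examples.
#include <afxdb.h>          // MFC ODBC classes: CDatabase, CRecordset

class CCustomerSet : public CRecordset
{
public:
    CCustomerSet(CDatabase* pDatabase = NULL) : CRecordset(pDatabase)
    {
        m_strName = _T("");
        m_strCity = _T("");
        m_nFields = 2;                        // number of bound columns
        m_nDefaultType = snapshot;            // or dynaset
    }

    // Column data members, named with the book's prefix-tag conventions.
    CString m_strName;
    CString m_strCity;

    virtual CString GetDefaultConnect()       // ODBC connect string
        { return _T("ODBC;DSN=SalesDSN"); }

    virtual CString GetDefaultSQL()           // table name or SELECT statement
        { return _T("Customers"); }

    virtual void DoFieldExchange(CFieldExchange* pFX)
    {
        pFX->SetFieldType(CFieldExchange::outputColumn);
        RFX_Text(pFX, _T("Name"), m_strName);
        RFX_Text(pFX, _T("City"), m_strCity);
    }
};

// Typical use: open the recordset and scan its rows.
void DumpCustomers()
{
    CCustomerSet rsCustomers;
    if (rsCustomers.Open())                   // uses GetDefaultConnect/SQL
    {
        while (!rsCustomers.IsEOF())
        {
            TRACE(_T("%s, %s\n"),
                  (LPCTSTR)rsCustomers.m_strName,
                  (LPCTSTR)rsCustomers.m_strCity);
            rsCustomers.MoveNext();
        }
        rsCustomers.Close();
    }
}

In an AppWizard-generated application, a CRecordView-derived form typically binds dialog controls to these column members through dialog data exchange, which is how a form can display data with little or no hand-written code.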
The most important benefit of selecting Visual C++ as your primary database development platform, however, isn't evident in Microsoft's feature list. There is a vast array of tools and support for ODBC and database development with Visual C++ today. Examples of the use of Visual C++ are found throughout this book.
Another reason for choosing Visual C++ for database application development is its OLE compatibility. At the time this book was written, Visual C++ was the best database development environment that incorporated OLE. OLE automation is likely to be the most significant OLE feature for the majority of Visual C++ database developers. OLE automation lets you control the operation of other OLE-compliant server applications from within your Visual C++ database application. Applications need not include Visual Basic for Applications to act as OLE automation source applications (servers); Word for Windows 6 and later supports OLE automation using the conventional Word Basic macro language syntax.

The Windows database war wasn't over at the time this book was written (heck, it may never be over), but Microsoft's multipronged attack with Visual Basic, Access, Visual FoxPro, SQL Server, and Visual C++ is forcing competing publishers of desktop database managers into their defensive trenches. As a group, Microsoft's database applications for Windows, together with ancillary products such as ODBC, DAO, and the Access Jet database engine, have a breadth and depth that no other software publisher presently can match.
Table 1.1. Visual C++-compatible databases and file drivers.

Access Database Engine Drivers: Access (.MDB), Btrieve (.DAT), dBASE III+ (.DBF, .NDX), dBASE IV (.DBF, .MDX), FoxPro (.DBF, .CDX, .IDX), Paradox (.DB, .PX)

Microsoft ODBC Drivers: Excel (.XLS)*, Text (.TXT)*, Access*, Btrieve*, dBASE III+*, dBASE IV*, FoxPro 2.x*, Paradox*

Third-Party ODBC Drivers: Microsoft SQL Server, Oracle 6, Sybase SQL Server, Digital Rdb, Gupta SQLBase, HP AllBase/SQL, HP Image/SQL, IBM DB2, DB2/2, IBM OS/2 DBM, IBM SQL/DS, Informix, Ingres, NCR Teradata, NetWare SQL, Progress, Tandem Nonstop SQL, Watcom SQL, XDB
NOTE Databases and files in the Microsoft ODBC Drivers column that are marked with an asterisk (*) are included in the Microsoft ODBC Desktop Database Drivers kit (16-bit) and Microsoft Query. With the exception of the ODBC driver for Rdb supplied by Digital Equipment Corporation and the Watcom SQL driver, the third-party drivers listed in the third column of Table 1.1 are products of Intersolv Software. Intersolv offers the same collection of ODBC drivers as Microsoft, except for the Access ODBC driver. Other database suppliers and third-party developers supply ODBC database drivers that you can use with Visual C++. A list of suppliers of ODBC database drivers appears in Appendix A, "Resources for Developing Visual C++ Database Applications."

Windows 95 currently ships with no ODBC drivers. Microsoft releases drivers from time to time, and Visual C++ 4.0 includes a full set of redistributable 32-bit ODBC drivers for both Windows 95 and Windows NT. Some of the original Windows NT ODBC drivers don't work well under Windows 95, so you would be well advised to test your applications under both platforms and with as many ODBC drivers as possible.
The Access database engine that is included with ODBC lets you use dBASE III+, dBASE IV, FoxPro, Paradox, Btrieve, and Access databases with equal facility. Microsoft's ODBC Administrator application and the ODBC drivers created by Microsoft and third-party developers add at least 20 additional databases and file types to the list of those with which a Visual C++ database application can be connected. Only Access can rival Visual C++'s universal database connectivity. Details of the two methods of adding database functionality to Visual C++ applications are given in Chapter 6, "The Microsoft Jet Database Engine."
NOTE To use Btrieve databases with Visual C++ 4.0, you need a Windows dynamic link library (DLL) that is included with the Btrieve for Windows application and other Btrieve products. Appendix A of this book, "Resources for Developing Visual C++ Database Applications," provides information on how to obtain the required Btrieve DLLs.
Borland's Paradox 7 for Windows 95 takes a tentative step in the multidatabase direction by letting you use Paradox or dBASE files interchangeably. If you want to, you can create a FoxPro or Paradox application that doesn't involve a single database file. However, you need to open an .MDB file to use Access; only a few Access database utility functions are available before you open a new or existing .MDB file.

Included with Microsoft Office and the 16-bit versions of Visual C++ is an add-in application, MS Query, that lets you create new Access databases as well as add, delete, and modify tables in new or existing Access, dBASE, FoxPro, and Paradox databases. Figure 1.1 shows MS Query's Table window for the Orders table of NorthWind.MDB, the sample Access database supplied with Access.

Figure 1.1. Visual C++'s data manager application, MS Query.
NOTE Visual C++ 1.5x includes the MS Query product and a second NWIND database; however, this example is a dBASE format database, not an Access format. If you need a sample dBASE database, you can use this one. Since dBASE database files aren't specific to 16-bit or 32-bit applications, this database will work with any of the dBASE ODBC drivers.
The Table window displays the structure of the existing fields of a table and lets you add new fields and indexes to a table. MS Query is an example of a Visual C++ database application that uses MDI forms. MDI lets you create database applications with several windows (called MDI child windows or forms) that are contained within a conventional window (called the parent window or form).

The Microsoft ODBC Administrator application, included with Visual C++, lets you connect to the Microsoft and Sybase versions of SQL Server and to Oracle client-server relational database management systems. Client-server RDBMSs are discussed later in this chapter. You can even treat text files and Excel worksheets as database tables by using the Microsoft ODBC Desktop Database Drivers kit.

Independent software development firms, such as Intersolv Software, provide a variety of ODBC drivers for client-server and desktop databases, as well as for worksheet and text files. Some of Intersolv's ODBC drivers provide features that aren't available when you use the Access database engine; an example is Intersolv's capability to employ transactions with dBASE III+, IV, and 5.0 files. Figure 1.2 shows the ODBC Setup window for the Pubs sample database of Microsoft SQL Server 4.2 running on LAN Manager 2.2. Using ODBC drivers with Visual C++ is the subject of later sections in this chapter and, in fact, the entire book.

The Intersolv DataDirect ODBC pack supports ALLBASE, Btrieve, CA-Ingres, Clipper, DB2, DB2/2, DB2/6000, dBASE, Excel, FoxBase, FoxPro, Gupta SQLBase, IMAGE/SQL, INFORMIX, InterBase, Microsoft SQL Server, Oracle, Paradox, PROGRESS, Scalable SQL (formerly Netware SQL), SQL/400, SQL/DS, SYBASE System 10, SYBASE SQL Server 4, Teradata, text files, and XDB.
Figure 1.2. The ODBC Setup window for the Microsoft SQL Server.
NOTE If you have Visual C++ 4.0, you can distribute the Microsoft ODBC Administrator application and the Microsoft/Sybase SQL Server or Oracle ODBC drivers with your Visual C++ applications. The Microsoft ODBC Desktop Database Drivers kit and Intersolv Software ODBC drivers require payment of a license fee for distribution. Contact Microsoft Corporation or Intersolv for the terms, conditions, and costs of distribution licenses. The file \MSDEV\REDIST\MSVC15\REDIST\REDISTRB.WRI contains details on distribution of the ODBC drivers. These drivers can be used with applications under both Windows 95 and Windows NT. The 16-bit ODBC drivers can be used with legacy 16-bit ODBC applications under Windows 95, but they can't be used under Windows NT, nor can they be used with 32-bit applications under Windows 95. For 32-bit applications, Intersolv ODBC drivers look like the best alternative when it is necessary to use non-Microsoft-supplied ODBC drivers.
Visual C++'s broad spectrum of database connectivity makes Visual C++ an excellent candidate for developing database front-end applications. The term database front end is used to describe a computer application that can select items of data contained in a database and display the chosen data items as information that is meaningful to the user. The database system itself is called the back end. The back-end database is, at the minimum, a collection of related tables. Traditional desktop database managers store these related tables as individual files in a single directory, together with index files that are used to speed the data-gathering process. Access and client-server RDBMSs store all related tables and indexes in a single database file.

Microsoft has achieved dramatic success in making the Windows graphical user interface (GUI) a worldwide standard for use on corporate PCs. At the time this book was written, Microsoft claimed to have sold more than 25 million copies of Windows 3.x. Windows 95, released in August of 1995, earned Microsoft more than $260 million in the quarter when it was released and more than $180 million in the following quarter; even in early 1995, Windows 95 had garnered enormous attention. Thus, it's no surprise that virtually all of today's database front ends are being created to run under Windows 95 or Windows NT. With Visual C++ and Access, Microsoft also has the upper hand in creating Windows database applications that employ a variety of database structures. Wide-ranging database connectivity is one of the major elements of Microsoft's strategy to obtain a major share of the enterprise-wide computing market.
This book uses the term front-end generator to describe a Windows application with which you can quickly create a database front-end application for a wide variety of desktop and client-server RDBMSs. Theoretically, any programming language that can create executable files for Windows can qualify as a front-end generator. You can write a Windows front end by using Visual Basic, C, C++, or Pascal compilers, and many large-scale MIS applications are written in C or C++. Writing even a simple Windows database front end in Visual Basic, however, requires a major programming effort that fails the "quickly" test. Visual Basic isn't as easy to use as it is sometimes purported to be. Thus, this book restricts the classification of front-end generators to the following two types of products:
* User-definable query processors let users create queries against a variety of RDBMSs by point-and-click or drag-and-drop techniques. A query is an SQL statement that you use to select records for display or updating. (SQL is discussed in more detail later in this book.) Query processors don't include a programming language per se, but many of these products provide a scripting or macro language to automate repetitive tasks. Some query processors include a graphical forms designer so that users can determine the appearance of the information returned by the query. Asymetrix InfoAssistant is a new 32-bit user-definable query processor that can deal with a variety of desktop and client-server databases. Channel Computing's Forest and Trees application is one of the more popular Windows query processors. Microsoft Query, which replaces the Intersolv add-in application included with earlier versions of Excel, offers drag-and-drop query generation based on the methods employed by Access's query design window.

* Front-end development tools include, at the minimum, a graphical forms designer and an application programming language. Queries are created by using graphical QBE (query by example) or by embedding SQL statements in a program. One of the tests of a front-end development tool is the product's capability to create a user-definable query processor. Microsoft Visual C++, Access, and FoxPro qualify in this category, as does PowerSoft Corporation's PowerBuilder. FoxPro qualifies because FoxPro can use ODBC to connect to a variety of database back ends.
More than 200 commercial Windows front-end generators were available at the time this book was written, about evenly divided between the two preceding categories. Most of these products also include a report generator to print formatted data. The retail version of Access uniquely qualifies in both categories of front-end generators because Access's user interface (UI) is simple enough that nonprogrammers can create their own database applications. Presently, Access is one of Visual C++'s most viable competitors in the front-end development tool market, as is Visual Basic.

A critical requirement of any front-end generator is the capability to transfer data to other Windows applications easily. Copying database information to the Windows Clipboard and pasting the Clipboard data into a compatible application, such as Excel, provides basic interapplication or interprocess communication (IPC) capability. Windows DDE (dynamic data exchange) is the most universal method of automatically transferring data to and from database front ends; however, DDE implementations, other than pasted dynamic links, seldom meet the "easily" part of the requirement. Visual C++ currently offers a combination of database connectivity and OLE compatibility that meets this requirement.
If you aren't proficient in SQL, you probably will need to learn yet another programming language to create database front ends with Visual C++. To select the data you want from a database attached to a Visual C++ application, you write the necessary SQL statement and then send the statement as a string variable to the Access database engine or an ODBC driver (a code sketch later in this section illustrates the technique). SQL (properly pronounced "S-Q-L," not the more common "sequel" or "seekel") is the lingua franca of relational database systems.

SQL has its roots in a language called SEQUEL (Structured English Query Language), which IBM developed at its San Jose Research Laboratory in the mid-1970s. SEQUEL later became SEQUEL/2 and ultimately was renamed SQL. The first two relational databases to use SQL were Oracle, developed by Relational Software, Inc. (now Oracle Corporation), and IBM's SQL/DS. The purpose of SEQUEL and its successors was to provide a relatively simple, nonprocedural programming language to manipulate relational database systems.

Visual C++ is a procedural language: You write a series of statements, such as if...else, to instruct the Visual C++ compiler to generate a series of instructions in a sequence you define. You control how the program executes to achieve the result you want. A nonprocedural language, on the other hand, expects you to write a series of statements that describe what you want to happen, such as SELECT * FROM TableName. The application that processes the statement determines how the statement is executed and simply returns the result (in this case, all the records contained in TableName).

One of the advantages of using SQL to manipulate relational databases is that the language has been standardized by a committee (X3.135) of the American National Standards Institute (ANSI). The first standardization effort began in the mid-1980s; ANSI X3.135-86 (SQL-86) specified the first standardized version of SQL. The 1986 standard was updated in 1989 (SQL-89) and in 1992 (SQL-92). Developers of RDBMSs that use SQL are free to extend the language in any way they see fit; however, SQL reserved words that are included in the ANSI standard must return the result specified by the standard. Extended SQL languages, such as Transact-SQL, which is used by the Microsoft and Sybase SQL Server RDBMSs, offer useful extensions to SQL. Some implementations of SQL, such as IBM's version for DB2, don't comply with the latest ANSI standards; for instance, you can't use the AS keyword to assign a derived column name to a DB2 column that contains a value expression, such as SUM(Expr).
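The following fragment is a minimal sketch of sending an SQL statement as a string variable through ODBC. It assumes an ODBC datasource named Customers and a CRecordset-derived class named CCustomerSet generated by ClassWizard; both names are hypothetical, and error handling is omitted.

#include <afxdb.h>

CDatabase db;
db.Open(_T("Customers"));                        // hypothetical ODBC datasource name

CString strSQL = _T("SELECT name, city, state FROM customer");
CCustomerSet rsCustomers(&db);                   // assumed ClassWizard-generated class
rsCustomers.Open(CRecordset::snapshot, strSQL);  // the driver executes the SQL string

while (!rsCustomers.IsEOF())
{
    // Bound member variables (m_name, m_city, and so on) hold the current row.
    rsCustomers.MoveNext();
}
rsCustomers.Close();
db.Close();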
NOTE Database programmers and many users usually use the term xBase to refer to database back ends that use dBASE-compatible files. With a dBASE database, each table and index is contained in a separate file.
Users of xBase RDBMSs, such as dBASE and FoxPro, will find the structure of SQL statements to be quite similar to the interactive xBase statements that you enter at the dot prompt. In this book, xBase refers to any desktop relational database management system that uses the dBASE file structure and supports, at a minimum, all the commands and functions of the dBASE III+ programming language. Compare the two xBase statements executed at the dot prompt,

USE customer
LIST name, address, city, state, zip_code FOR zip_code >= 90000

with the single SQL statement contained in a Visual C++ string variable:

SELECT name, address, city, state, zip_code FROM customer WHERE zip_code >= 90000

Both return the same result: a list of the names, addresses, cities, states, and zip codes of all customers whose zip codes are equal to or greater than 90000.

Most of the recent implementations of desktop RDBMSs include SQL implementations that have varying degrees of conformance to the ANSI SQL-89 specification. Access's dialect of SQL conforms quite closely to ANSI-89 syntax, but it's missing the Data Definition Language (DDL) elements of SQL that you need to create databases and tables with conventional SQL statements. Access SQL also omits the Data Control Language (DCL) that lets you GRANT or REVOKE privileges for users to gain access to the database or the tables it contains. Access SQL compensates for this omission, at least in part, by providing the TRANSFORM and PIVOT keywords that let you create very useful crosstab queries (which are described in a moment). Chapter 5, "Learning Structured Query Language," describes the structure of SQL statements and how to implement SQL in your Visual C++ code.
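As a preview of the crosstab syntax, the following fragment is a minimal sketch of a TRANSFORM...PIVOT statement stored in a Visual C++ string variable. The Products, Orders, and Order Details table names follow the NorthWind sample database but are assumptions here, not a tested query.

CString strCrosstab =
    _T("TRANSFORM Sum([Order Details].Quantity) ")
    _T("SELECT Products.ProductName ")
    _T("FROM Products INNER JOIN (Orders INNER JOIN [Order Details] ")
    _T("ON Orders.OrderID = [Order Details].OrderID) ")
    _T("ON Products.ProductID = [Order Details].ProductID ")
    _T("GROUP BY Products.ProductName ")
    _T("PIVOT Format(Orders.OrderDate, 'yyyy-mm');");
// Passing strCrosstab to the Jet engine would return one row per product
// with one column of summed quantities for each month.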
Database front-end applications that you create with front-end generators fall into two broad categories:
* Decision-support applications that only let you display and print information culled from the database by predefined (hard-coded) or user-defined queries

* Transaction-processing front-end applications that include the capability to edit data or add data to the database
The following sections describe the basic characteristics of these two categories of database front ends.
Decision-support applications represent the most common type of database front-end application. Single-purpose decision-support front ends typically display sales information for selected customers or products. At the other end of the decision-support spectrum, complex management information systems (MIS) provide summarized information concerning virtually all of the quantifiable aspects of large organizations' operations. Decision-support applications usually involve read-only access to the data in the underlying database. Chapter 9, "Designing a Decision-Support Application," is devoted to writing Visual C++ code to display information gleaned from relational databases.

Many decision-support front-end development tools include the capability to create graphs and charts based on summary data. Grouping and totaling data to create summary information often is called rolling up the data. The Access database engine lets Visual C++ decision-support applications perform crosstab rollups. Crosstab methods let you display summary data in worksheet format, usually as a time series. Using a crosstab query, you can display sales of products (in rows) by month or by quarter (in columns) directly from tables that contain only raw invoicing data. Crosstab queries are one of the subjects of Chapter 8, "Running Crosstab and Action Queries." Drill-down methods let you show the detailed data that underlies your summary information.

In-house and independent database-application developers use Visual C++ to create a wide variety of single-purpose and MIS decision-support front ends. Here are the principal advantages of Visual C++ over competing front-end development tools for creating decision-support applications:
* You can distribute unlimited numbers of your compiled Visual C++ front-end applications without paying royalties. Most other front-end generators require that you pay a license fee for each copy of the compiled front-end applications you install. (License fees for applications you create are called per-seat charges.) However, you might need to pay a per-seat license fee for the ODBC drivers you use with your Visual C++ application if you need to use drivers other than those supplied with Visual C++.

* The purchase price of Visual C++ is substantially less than the prices charged for other front-end generators with comparable or inferior feature sets.

* Few front-end generators support the Access SQL TRANSFORM and PIVOT statements, which let you quickly create crosstab queries when you use the Microsoft Jet database engine.

* Visual C++ applications can embed OLE graphic objects from applications such as Microsoft Graph. Visual C++ is OLE compliant (as a destination or client application and as a server application) and includes OLE Automation capability. You can use in-situ editing (also called in-place editing) and exercise control over other Windows applications that share OLE Automation capability.

* You don't need to learn a new and arcane programming language to develop Visual C++ database front ends. The structure and syntax of Visual C++ is closely related to traditional database programming languages such as xBase and the Paradox Application Language (PAL).

* Visual C++ is flexible. Often, constructs that are difficult or impossible using other development platforms can easily be created using Visual C++.

* Visual C++ is an object-oriented language. Visual C++ qualifies as a full-scale, object-oriented programming (OOP) language. It's likely that future versions will include an even more extensive implementation of MFC. Competing front-end development tools that claim to be object-oriented seldom reach Visual C++'s level of compliance with the object programming model.
Many of the advantages in the preceding list apply equally to decision-support and transaction-processing front ends. This list is by no means comprehensive. Many other advantages that derive from choosing Visual C++ as your database front-end development tool will become apparent as you progress through this book. Here are the principal drawbacks to using Visual C++ as a decision-support front end:
* Visual C++ has limited support for graphics formats in image and picture boxes. Visual C++ supports only Windows bitmaps (.BMP and .DIB), icons (.ICO), and Windows metafile (.WMF) vector images. However, a variety of third-party add-ins and custom controls are available that dynamically convert .PCX, .TIF, .JPG, .GIF, and other common graphics file formats to one of Visual C++'s supported formats. It can be expected that there will be many OLE Custom Controls for graphics available to Visual C++ programmers in the next few years.

* With ODBC, Visual C++ lacks the direct capability to establish rules that enforce referential integrity at the database level with Access .MDB files, and ODBC can't add validation rules to enforce domain integrity at the Access table level. You need to write Visual C++ code to enforce referential and domain integrity in all supported databases when using ODBC. Visual C++ enforces referential and domain integrity rules that you establish when you create the database with Access. Many Visual C++ 4 front-end applications will use DAO for accessing Access databases.

* Visual C++ can't directly implement the security features inherent in Access .MDB databases when using ODBC. By default, Visual C++ doesn't use Access's SYSTEM.MDA file (or Access 7's SYSTEM.MDW file), which contains user names, passwords, and other security information for .MDB files created by Access. If your front-end application is used by members of only one workgroup, you can specify the name and location of the workgroup's SYSTEM.MDA/MDW file in Visual C++'s VB.INI file or the Visual C++ APPNAME.INI file associated with the APPNAME.EXE file for your application. (Visual C++ expects the filename of the .EXE and .INI files to be the same.) If you have implemented or need to implement security features, such as adding new users to your Access database, you can use the Access ODBC driver (RED110.DLL). This lets you attach a SYSTEM.MDA/MDW file to implement database security instead of using a Visual C++ database object to connect directly to the Access database engine. Use the GRANT and REVOKE SQL reserved words to manage database- and table-level security. These limits don't apply to applications developed using the MFC DAO classes, however.
The limitations of Visual C++ are likely to affect only a small portion of the decision-support front ends you create for production-database applications. Future versions of Visual C++ probably will include an equivalent to Access's OLE object frame controls.
NOTE Unlike earlier versions of Visual C++, the Visual C++ 4.0 product includes a redistributable copy of the Access Jet database engine. The Jet engine is used by the DAO functionality of Visual C++ 4.0. This version of the Microsoft Jet database engine is 32-bit only and doesn't support 16-bit applications. There is no 16-bit version of the Microsoft Jet database engine for Visual C++.
Transaction-Processing Applications
Front ends for transaction processing let users update the tables of databases. Transaction processing involves editing or deleting existing records in the database tables or adding new records to the tables. Thus, users of transaction-processing applications need read-write access to the tables they want to modify. Transaction-processing applications require that either the database itself or your Visual C++ code preserve the integrity (related to accuracy) of the data. Enforcing domain (data value) integrity and referential (record) integrity in databases that users can update is covered in Chapter 4, "Optimizing the Design of Relational Databases."

Transaction processing implies the capability of using the SQL reserved words COMMIT and ROLLBACK to execute or cancel pending changes to the tables, respectively. All modern client-server databases support COMMIT and ROLLBACK transaction processing, but only a few desktop databases incorporate native transaction-processing capabilities. Access databases, for example, support transaction processing internally, whereas dBASE databases do not. Visual C++ supports transaction processing with the ODBC functions SQLPrepare() and SQLTransact() and the keywords SQL_COMMIT and SQL_ROLLBACK. Chapter 15, "Designing Online Transaction-Processing Applications," shows you how to use Visual C++'s transaction-processing keywords to speed updates to RDBMS tables.
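The following fragment is a minimal sketch of these ODBC transaction calls. It assumes environment (henv), connection (hdbc), and statement (hstmt) handles that have already been allocated and connected to a datasource; the UPDATE statement and column names are hypothetical.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Turn off autocommit so that statements accumulate in a transaction.
SQLSetConnectOption(hdbc, SQL_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF);

RETCODE rc = SQLExecDirect(hstmt,
    (UCHAR FAR *)"UPDATE customer SET state = 'CA' WHERE zip_code >= 90000",
    SQL_NTS);

if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
    SQLTransact(henv, hdbc, SQL_COMMIT);    // make the pending change permanent
else
    SQLTransact(henv, hdbc, SQL_ROLLBACK);  // cancel the pending change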
NOTE
ODBC drivers can provide transaction-processing capability for databases that don't ordinarily support SQL_COMMIT/SQL_ROLLBACK transaction processing. Intersolv's dBASE ODBC driver, for example, lets you use SQL_COMMIT or SQL_ROLLBACK in a call to SQLTransact() that operates on dBASE tables.
In a multiuser environment, transaction-processing front ends must maintain database consistency and concurrency. Simplified descriptions of these two terms follow:
* Consistency problems occur when the first user executes a transaction that updates a set of records and a second user attempts to view the records while the transaction is in process. Depending on the level of isolation provided by the database management system, the second user might see the wrong data (called a dirty read), the wrong data followed by the right data (a nonrepeatable read), or erroneous data that results from the first user's transactions, which alter the rows that are included in the result of the second user's query (called phantom data).

* Concurrency problems result when two or more users attempt to update the same record simultaneously. Unless a method is provided of locking the values of data in a record until the first user's transaction completes, you can't predict which user's update will prevail. Database, table, page, and/or record locking are provided by most database management systems to overcome concurrency problems. Locking the entire database or one or more tables during a transaction is seldom a satisfactory method in a multiuser environment because of the lock's effect on other users. Page or record locking, however, can result in a condition called deadlock, in which two users attempt to execute transactions on the same records in a set of two or more tables. Client-server database management systems use a variety of methods to detect and overcome deadlock conditions. If you're using a desktop RDBMS, you usually need to write your own anti-deadlock code in Visual C++.
Both consistency and concurrency issues can be controlled by the locking methods employed in multiuser environments. Visual C++ supports the following locking methods:
* Database-level locking for client-server and Access .MDB databases, in which your application opens the database for exclusive rather than shared use. Database-level locking ordinarily is used only when you alter the structure of a database or when you compact or repair an Access database.

* Table-level locking is available for all database types. A table lock opens a dBASE, Paradox, or Btrieve file for exclusive use. You open Access and client-server databases for shared use and then open the table for exclusive use. You can prevent consistency problems by setting the value of the Options property of the table to deny other users the capability to read the values in the table while it's locked.

* Dynaset-level locking locks all of the tables that are used by the Dynaset object. A Dynaset, a unique feature of Visual C++ and Access, is an updatable view of a database. Dynaset-level locking is available for all database types. To resolve consistency problems at the expense of concurrency, you can deny others the capability to read data with the Dynaset object's Options property.

* Record-level locking is used for databases whose tables have fixed-length records, such as dBASE, FoxPro, and Paradox. Record-level locking provides the highest level of concurrency. You open the table file for shared use to employ record-level locking.

* Page-level locking is used for Access and most client-server databases that use variable-length records. Access databases, for example, lock 2,048-byte pages. Thus, locking a single page also can lock many contiguous records if each record contains only a small amount of data. Page-level locking usually results in a lower level of concurrency than record locking.
NOTE If you write a database program that appears unable to access a record because it's locked, but your application doesn't have that record locked, it's possible that the database page is locked by another application and that your program is actually functioning correctly.
Pessimistic locking applies only to record-level and page-level locking. Pessimistic locking locks the page(s) or record(s) as soon as a user employs the Edit or BeginTrans method and doesn't release the lock until the Update or CommitTrans method completes the edit, or until the edit is canceled with the Rollback method. Pessimistic locking, Visual C++'s default locking method, guarantees that your edit will succeed.

Optimistic locking also is restricted to record-level and page-level locking. Optimistic locking places locks on the record or page only during the time that it takes for the Update or CommitTrans method to execute. Optimistic locking offers a greater degree of concurrency, but you can't be assured which of two simultaneous edits will prevail.
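The following fragment is a minimal sketch of switching between the two locking methods with the MFC DAO classes. It assumes an open CDaoDatabase object named db and a hypothetical Customers table; the field value shown is arbitrary, and error handling is omitted.

#include <afxdao.h>

CDaoRecordset rs(&db);                      // db is an open CDaoDatabase (assumed)
rs.Open(dbOpenDynaset, _T("SELECT * FROM Customers"));

rs.SetLockingMode(TRUE);                    // TRUE = pessimistic locking
rs.Edit();                                  // the page is locked here...
rs.SetFieldValue(_T("City"), COleVariant(_T("Oakland"), VT_BSTRT));
rs.Update();                                // ...and released here

rs.SetLockingMode(FALSE);                   // FALSE = optimistic locking
rs.Edit();                                  // no lock is placed yet...
rs.SetFieldValue(_T("City"), COleVariant(_T("Berkeley"), VT_BSTRT));
rs.Update();                                // ...the page is locked only during Update()

rs.Close();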
When you use a client-server RDBMS, the server back end usually handles the page-level locking process for you. The majority of client-server RDBMSs let you specify the level of locking and the page-level locking method to be employed through SQL keywords such as SQL Server's HOLDLOCK instruction. You need to use the SQL pass-through option when you want to use SQL reserved words that aren't included in Access SQL. SQL pass-through is discussed in the section "Client-Server RDBMSs" later in this chapter.

The Access database engine can create and maintain indexes for each database type that the engine supports. You need a primary key index in order to update data contained in Paradox and client-server database tables. (Visual C++ doesn't use or maintain Paradox secondary or query speed-up indexes that are created on more than one column or that are designated as unique.) It's good database-programming practice to create indexes on the primary key field(s) of all of the tables in your database. (Visual C++, however, doesn't recognize indexes on primary key fields of dBASE or Btrieve tables as PrimaryKey indexes.) Adding indexes on the foreign key fields of related tables speeds queries that involve more than one table.
NOTE Visual C++'s ODBC drivers can neither read nor maintain the .NTX index files created for .DBF files by CA-Clipper applications. Intersolv Software offers an ODBC driver that can read and update CA-Clipper .NTX indexes. If you want to use Visual C++ front ends concurrently with CA-Clipper DOS applications, you need to use the Intersolv ODBC driver to convert all the database indexes to dBASE-compatible index file formats.
As you add more indexes to your tables, the speed of transaction-processing operations decreases when you update the data values contained in the indexed fields. Thus, the number of indexes you create for a table depends on whether the table is used primarily for decision-support or transaction-processing applications. Choosing the right index structure is discussed in Chapter 4.
NOTE Multiple indexes drastically slow the importation of data from unsupported file types, such as delimited text files, to your existing tables. When you import data, you might find it much faster to create a new table to hold the imported data, then index the new table and append the data from the new table to the existing table.
NOTE The 32-bit ODBC driver for Microsoft SQL Server also can be used with Sybase SQL Server and Sybase System 10, but the driver isn't supported by Microsoft for use with Sybase RDBMSs. When used with Sybase products, some features of Sybase System 10 aren't available when using the Microsoft driver.
The following sections describe the four basic categories of database management systems you can use with your Visual C++ database applications.
Traditional desktop RDBMSs, typified by dBASE and Paradox, use separate files for each table and index, or a collection of indexes for a single table in the case of dBASE IV and later .MDX and FoxPro .CDX indexes. dBASE and Paradox tables use fixed-width (also called fixed-length) records. You specify the maximum size of each field of the Character data type; data values shorter than the maximum size automatically are padded with blanks (spaces) to the maximum size of the field. Btrieve tables provide for variable-length character fields, which can save a substantial amount of disk space if the length of data values in character fields varies greatly.

The Visual C++ documentation defines a database comprising traditional desktop RDBMS table and index files as an external database. This book doesn't use the term external database because no complementary internal database is defined in Visual C++. The dBASE, FoxPro, Paradox, or Btrieve database is specified as the well-formed path to the directory that contains the table and index files that you need for your application. A well-formed path, also called a fully qualified path, consists of the drive designator and the path to the folder that contains the table and index files, such as C:\VBDBS\DBASE. If your tables are stored on a file server (such as Windows NT or Windows 95) that uses the Uniform Naming Convention (UNC), you substitute the server name for the drive identifier, as in \\SERVER\VBDBS\DBASE.

You specify the indexes to be used with each of the tables in individual .INF files located in the same directory. The filename of the .INF file is the same as that of the table file; thus, the information file for CUSTOMER.DBF is named CUSTOMER.INF. If you use dBASE IV multiple-index files, only one entry is required: NDX1=CUSTOMER.MDX. For dBASE III+ indexes, the index files are identified by sequentially numbered entries, such as NDX1=CUSTNAME.NDX, NDX2=CUSTZIP.NDX, and so on, with each entry on a separate line. You need .INF files for dBASE III+, dBASE IV, and FoxPro files, but not for Paradox or Btrieve files. When you create a dBASE or FoxPro table and specify an index, Visual C++ automatically creates the .INF file for you. To use existing .MDX or .NDX index files with your .DBF file, you need to use Windows Notepad or another text editor to create the .INF file (a sample .INF file appears at the end of this section).

Btrieve's data definition file, FILES.DDF, serves the same purpose as the .INF file. Access can't create the FILES.DDF file for Btrieve databases. You need Xtrieve or a third-party Btrieve utility program to create the necessary Btrieve data definition file. Other requirements for the creation of Btrieve files are discussed in Chapter 6.
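As an illustration of the entry formats just described, here is a hypothetical CUSTOMER.INF file for a CUSTOMER.DBF table that uses two dBASE III+ index files; the filenames are assumptions.

NDX1=CUSTNAME.NDX
NDX2=CUSTZIP.NDX

A table that uses a dBASE IV multiple-index file would instead contain the single entry NDX1=CUSTOMER.MDX.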
NOTE The Access database engine doesn't have the capability to remove deleted records from dBASE and FoxPro table files. You need an application that supports the xBase PACK statement to eliminate deleted records and recover the fixed disk space that the deleted records consume.
FoxPro and dBASE III+/IV memo files that are associated with database tables must be stored in the same directory as the table that contains a Memo field data type. If the associated memo file is missing or corrupted, you receive an error message from Visual C++ when you attempt to open the table file. With dBASE 5/Visual dBASE databases, you also have OLEOBJECT data types, which are stored externally from the main database file(s).
NOTE
It's good database-programming practice to place all the table, memo, and index files you need for your application in a single database directory. Some xBase applications, such as accounting products, require that groups of files be stored in different directories. Visual C++ lets you open more than one database at a time; thus, you can deal with table, memo, and index files that are located in more than one directory.
The manipulation of data in the table files and the maintenance of the indexes of traditional desktop databases are the responsibility of the database application. The application translates high-level statements, such as SQL's SELECT or dBASE's LIST expressions, into low-level instructions that deal directly with records in the table files. If you run queries from a workstation against large table files that are located on a network file server, a very large number of low-level instructions are sent across the network to the file server. When a large number of users attempt to run queries simultaneously, to the same or other tables on the server, performance can suffer dramatically because of network congestion.
NOTE There is no equivalent in Visual C++ to the record number associated with traditional RDBMS tables. Microsoft makes the valid point that record numbers are meaningless in SQL databases. (However, Access assigns record numbers to tables and query results that Access displays in datasheet mode.)
Client-Server RDBMSs
The term front end originally appeared in conjunction with client-server RDBMS applications. Front end refers to the client application that runs on a workstation connected to the server (back end) on a local area network (LAN) or wide area network (WAN). The rapid growth of the client-server database market in the 1990s stems from the desire of users of mainframe and minicomputer database management systems to downsize their information systems. Downsizing means substituting relatively low-cost file servers, most often based on PC architecture, for expensive mainframe and minicomputer hardware and database software products that are costly to maintain. Today's trend is toward distributed client-server systems. In distributed database systems, tables that contain the data to satisfy a query might be located on several different servers in widely varying locations that are connected by a WAN.

The operating system for the server portion of the client-server RDBMS need not be (and often is not) the same as the operating system used by the client workstations. For example, Microsoft SQL Server 6 runs under Windows NT Server, and Sybase SQL Server runs under UNIX on minicomputers or as a NetWare Loadable Module (NLM) on Novell PC file servers. However, it's likely that the majority of both Microsoft and Sybase SQL Server clients now run under the Windows graphical environment.

Client-server systems differ greatly from desktop database management systems. The primary distinction is that all SQL statements issued by the front-end application are executed by the server. When a workstation sends a conventional SELECT query to the server, only the rows that meet the query's specifications are returned to the workstation. The server is responsible for executing all SQL statements sent to the server by workstations. The server also handles all concurrency and consistency issues, such as locking. If a query issued by a workstation can't be completed by the server, the server returns an error message to the workstation. Combining high-level and low-level instruction processing at the server solves most network congestion issues.

The majority of client-server RDBMSs store all databases in a single, very large file. Where necessary, the file can be divided between server computers, but the divided file is treated as a single file by the server's operating system. Client-server RDBMSs include other sophisticated features, such as the maintenance of transaction logs that let databases be re-created in the event of corruption by a major hardware or software failure. Most client-server products now can use fixed disk arrays and mirrored fixed disks that reduce the likelihood that a failure of part or all of a single fixed disk drive will bring client services to a halt.

The easiest method of connecting your Visual C++ database application to a client-server database is to use the appropriate ODBC driver. This book refers to a client-server database connected through an ODBC driver as a datasource. To open a connection to a datasource, you need to have previously defined the datasource with the ODBC Administrator application that is supplied with the Professional Edition of Visual C++ or another Microsoft application, such as Microsoft Query, that uses ODBC. You need the datasource name (DSN), a valid user login identifier (UID), and a valid password (PWD) to open a client-server datasource as a Visual C++ CDatabase object or to attach tables from the datasource to an open Access database.
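The following fragment is a minimal sketch of opening such a datasource as an MFC CDatabase object; the datasource name PUBS, the user ID, and the password are placeholders for values you define with the ODBC Administrator.

#include <afxdb.h>

CDatabase db;
db.Open(NULL, FALSE, FALSE,
        _T("ODBC;DSN=PUBS;UID=sa;PWD=password"));  // DSN, UID, and PWD are placeholders

// ... open CRecordset-derived objects against db here ...

db.Close();

If the connect string omits the UID or PWD elements, the ODBC driver ordinarily displays a login dialog to collect the missing information.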
NOTE Often, programs will have references to the MFC database objects (CDatabase, CRecordset, and CRecordView). The programmer who is writing only in C can get the same functionality using the SQL...() functions, which are supported by the ODBC connectivity libraries.
Although you can use the Access database engine to process queries against client-server databases that you open as a Visual C++ Database object or that you attach to an Access database, using the SQL pass-through option takes better advantage of the client-server environment. When you specify the use of SQL pass-through, the server processes the query and returns a recordset structure that contains the rows returned by the query (if any). The term recordset refers to any database object that contains data in the form of records. You also can use SQL pass-through to execute action queries that append, update, or delete records but don't return a query result set (a pass-through sketch appears at the end of this section).

SQL pass-through lets you execute stored procedures if your client-server database supports stored procedures. (The Microsoft and Sybase versions of SQL Server support stored procedures.) A stored procedure is a compiled version of a standard SQL query that your application uses repeatedly. Stored procedures execute much faster than conventional SQL queries, especially when the query is complex.

Client-server RDBMSs vary widely in purchase price. As a rule, the price of the server software depends on the number of simultaneous workstation connections that the server supports. As with runtime versions of traditional RDBMSs, you purchase copies of the workstation software that are necessary to connect to the server. Microsoft SQL Server is currently the lowest-cost commercial client-server RDBMS available from a major software publisher. You can run Microsoft SQL Server as a service of the Microsoft LAN Manager 2.2 network operating system (NOS), under Novell NetWare, or under Windows NT. Chapter 20, "Creating Front Ends for Client-Server Databases," describes how to use these Microsoft RDBMSs with Visual C++ front ends.
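The following fragment is a minimal sketch of an SQL pass-through action query sent with the MFC DAO classes. It assumes a SQL Server datasource named PUBS that contains the sample titles table; the connect string values are placeholders, and the dbSQLPassThrough option causes the server, not the Jet engine, to execute the statement.

#include <afxdao.h>

CDaoDatabase db;
// An empty database name plus an ODBC connect string opens the datasource through Jet.
db.Open(_T(""), FALSE, FALSE, _T("ODBC;DSN=PUBS;UID=sa;PWD=password"));

db.Execute(_T("UPDATE titles SET price = price * 1.1 WHERE type = 'business'"),
           dbSQLPassThrough | dbFailOnError);

db.Close();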
Access deserves its own category because Access databases bear little resemblance to traditional desktop database structures. The Microsoft documentation for Visual C++ refers to both "Access databases" and "Visual C++ databases." It's likely that Microsoft intended these two terms to mean "databases created with Access" (which requires a SYSTEM.MDA file for versions of Access prior to 7 and SYSTEM.MDW for Access 7 and later) and "databases created with Visual C++" (which doesn't require SYSTEM.MDA or SYSTEM.MDW), respectively. For consistency, this book uses the term Access database no matter what application is used to create the .MDB file (which contains the actual data).
NOTE Access 95 replaces Access 1.x and 2.0 SYSTEM.MDA files with SYSTEM.MDW, called a workgroup file, which fulfills similar security functions. The .MDA file extension is now reserved for Access library files. Visual C++ 4.0 doesn't require SYSTEM.MDW or SYSTEM.MDA, but a workgroup file is needed if you want to take advantage of the Groups and Users collections to manipulate permissions for secure multiuser .MDB files.
As mentioned at the beginning of this chapter, Access is the default database type for Visual C++. Microsoft's choice for the default database type is understandable because Access .MDB files have a structure that is proprietary to Microsoft Corporation. Thus, you need to purchase a Microsoft product to use Access database files. All the Microsoft applications that can handle Access database files are Windows applications. It is highly unlikely that Microsoft will publish the intimate details of the Access .MDB file structure as an "open standard," at least in the foreseeable future.

Despite the proprietary nature of Access database files, you're likely to find that Access is the database type to select when the choice is yours to make. Access database files include many of the features of the client-server databases described in the preceding section. Much of the architecture of Access .MDB files is based on the structure of Microsoft SQL Server database files. Here are some similarities between Access and client-server databases:
* All the tables and indexes for a database are stored in a single .MDB file.

* Fields of the Text, Memo, and OLE Object field data types are variable-width. Access tables adjust the sizes of numeric fields to accommodate the fundamental data type used in the field.

* Date fields include time information. The Date field data type corresponds to the timestamp data type of SQL-92 but isn't stored in timestamp format.

* Access tables support null values. The null value, indicated by the keyword SQL_NULL_DATA, is different from the NULL identifier in Visual C++ and indicates that no data has been entered in the data cell of a table. The null value isn't the same as an empty string (""). All client-server databases support null values, but few other desktop databases do.

* You can store query definitions, which are SQL statements saved as named objects, in Access databases. A QueryDef object is similar to an SQL SELECT statement compiled as an SQL Server stored procedure.

* Access Memo fields behave as if the field data type were Text and could hold up to 32,000 characters. The size of OLE Object (LargeBinary or BLOB, an acronym for binary large object) fields is limited only by the size of the database, which in turn is likely to be limited by your fixed-disk capacity, not by the Access .MDB file structure. You can store data of any type in an Access table's BLOB field by using the GetChunk and AppendChunk methods to read and write data to BLOB fields. BLOBs are usually used for graphics images.

* You can enforce referential integrity between tables and enforce domain integrity at the table level in Access databases. (Enforcement of domain integrity occurs only when you attempt to change the value of a field.)

* Access databases include built-in security features and can be encrypted. You need a second file, usually named SYSTEM.MDW, to implement the security features of Access databases.
Other advantages of using Access databases include the capability to attach tables of other supported database types. The Microsoft documentation contains ambiguous references to external tables and attached tables. As I mentioned earlier in this chapter, this book doesn't use the term external tables. You can gain a significant speed advantage if you attach tables from client-server databases to an Access database rather than opening the client-server data source as a Visual C++ CDatabase object.
NOTE You usually gain an even greater speed advantage when you use the SQL pass-through option to cause your SQL query statements to be executed by the database server rather than by the Access database engine.
If you have the appropriate software and hardware (called a gateway or MiddleWare), you can connect to several popular mainframe and minicomputer RDBMSs, such as IBM's DB2 or Digital Equipment Corporation's Rdb. Suppliers of gateways to DB2 databases that are compatible with ODBC include Micro Decisionware, Inc. (now part of Sybase); Information Builders, Inc.; Sybase; TechGnosis, Inc.; and IBM Corporation. (Additional information on these gateways is included in Appendix A.) In addition to the gateway, you need the appropriate ODBC driver for the mainframe or minicomputer database to which you want to connect. One of the principal commercial uses of Visual C++ is to create front ends for IBM DB2 databases.
NOTE IBM now offers DB2/2 for use under OS/2 version 2.x in both a single-user and a multiuser version, and it is readying another DB2 variant for use under Windows NT. DB2/2 is the replacement for the OS/2 Database Manager (DBM) for OS/2 version 1.3. You can use the Intersolv DB2/2 ODBC driver with either the single-user or multiuser version of DB2/2 to emulate mainframe DB2 databases during development of your front-end application. Having a desktop version of DB2 can save many hours of negotiation with your DB2 database administrator when you need to restructure or reload your test database.
You can even use SQL statements to query nonrelational databases such as CODASYL network databases or hierarchical databases typified by IBM's IMS. Products such as Information Builder's EDA/Link for Windows and the IBI EDA/SQL database engine make network and hierarchical databases behave like client-server applications.
Creating database applications for the character-based environment of DOS traditionally has involved top-down programming techniques. Using xBase as an example, you start at the "top" with a main program, such as APPNAME.PRG, in which you declare your PUBLIC (Global) variables and constants, and then you add the code you need to create the DO WHILE .T....ENDDO main menu loop for your application. Next, you add the procedures that include more menu loops for the next level of submenus. Then you write the procedures that contain the @...SAY and @...GET statements to create the screens that constitute the core of your DOS application. Finally, you add the accouterments, such as data validation procedures and report printing operations.

As an experienced database developer, you write modular source code. You've written your own libraries of standard procedures and user-defined functions that you reuse in a variety of applications. You also might employ add-in libraries created by other developers. If you use CA-Clipper, you spend a substantial amount of time recompiling and linking your application during the development cycle.

To use Visual C++, you'll need to abandon most of the programming techniques to which you've grown accustomed and adopt Windows' object-oriented, event-driven, method-centered programming style. The first major difference you'll discover when you switch to Visual C++ as your database development platform is that you don't create a "main" program. The "main" program is Microsoft Windows. There is a hidden WinMain function in every Visual C++ program, but Windows itself has the final say on how your application executes. Your application can't execute any code until an event occurs because Visual C++ procedures begin as event handlers. Event handlers are methods that your application executes in response to events.
NOTE The preceding paragraph describes all Visual C++ functions as being event handlers. Even though a C program might not seem to be written by using event handlers, it actually is. With C++ programs created by using AppWizard, the event/function relationship is very visible through the ClassWizard interface.
Events originate from user-initiated actions, such as mouse clicks or keystrokes, that occur when a Visual C++ form is the active window. The active window is said to have the focus. During the time that your application is quiescent (when no events are being processed), Windows is in an idle loop, waiting for the next event. A Windows idle loop replaces the traditional for() {...} menu loops of character-based DOS applications. Your Visual C++ event-handling code is contained in modules that are matched to each dialog box and menu in your application. You can create modules and declare variables with global scope.
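The following fragment is a minimal sketch of an event handler as ClassWizard would wire it up for a hypothetical dialog class named CCustomerDlg; the control ID and handler name are assumptions, and the class is assumed to be declared elsewhere with DECLARE_MESSAGE_MAP().

BEGIN_MESSAGE_MAP(CCustomerDlg, CDialog)
    ON_BN_CLICKED(IDC_FIND_CUSTOMER, OnFindCustomer)   // maps the click event to the handler
END_MESSAGE_MAP()

void CCustomerDlg::OnFindCustomer()
{
    // This code runs only when Windows dispatches the button-click event.
}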
NOTE The structure of Visual C++ applications differs considerably from Access applications that include Visual Basic for Applications code. In Access, all Visual Basic for Applications code is contained in modules. Visual Basic for Applications event-handling code uses functions, and it's up to you to assign names to the event-handling functions. Visual C++ has no direct counterpart to Access's macro actions that you execute with DoCmd statements. It's possible to import Visual Basic for Applications code into Visual C++ applications, but you have to do a line-by-line conversion of the Visual Basic for Applications code. Chapter 18, "Translating Visual Basic and Visual Basic for Applications Code to Visual C++," details some of the differences between Visual Basic for Applications and Visual C++.
Visual C++ makes extensive use of object-oriented programming terminology to describe the components of applications. Visual C++ classifies dialog boxes, controls on dialog boxes, databases, and tables as objects. An object possesses characteristics and behavior. The characteristics of an object are the object's properties (data), and the behavior of the object is determined by its methods (incorporated in event-handling code). An object is a container for data and code. Objects can contain other objects; dialog boxes, for example, contain control objects. Each Visual C++ object has its own predetermined set of properties to which Visual C++ assigns default values. The methods that are applicable to a programming object are a set of Visual C++ reserved words that are pertinent to the class of the object. The set of methods that is applicable to dialog boxes differs greatly from the set of methods that is used with recordset objects.
Visual C++ lets you create object variables that refer to objects with the CObject* ObjectPointer and ObjectPointer = &Object statements. After these two statements are executed, ObjectPointer is a reference (pointer) to the original object. You can assign as many different variables to the same object as you want. If you use the reserved word new in an assignment statement, as in NewObject = new ObjectName, you create a new instance of the class. An instance is a separate object that you can manipulate independently of the original object. Object variables are an essential element of database application programming with Visual C++.
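The following fragment is a minimal sketch of these statements; the CCustomer class is hypothetical and stands in for any CObject-derived class:

CCustomer Customer;                       // an existing object (hypothetical class)
CObject*  ObjectPointer;                  // a pointer that can refer to any CObject
ObjectPointer = &Customer;                // ObjectPointer now refers to Customer

CCustomer* NewObject = new CCustomer;     // a separate instance of the class
// ... manipulate *NewObject independently of Customer ...
delete NewObject;                         // objects created with new must be deleted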
Variables declared with the xBase reserved words PUBLIC and PRIVATE default to the Logical data type and are assigned a new data type when they are initialized with a value other than .T. or .F.. Because xBase has only the four fundamental field data types used in dBASE III+ .DBF files (Character, Numeric, Date, and Logical), it's a simple matter for an xBase interpreter to determine the data type from the assigned value and to treat the variable according to its content. xBase is said to have weak data typing. In contrast, compiled languages such as Pascal and C have strong data typing. You must explicitly declare the data type when you name the variable. Early versions of the C language took the middle road to data typing: All variables were explicitly declared, but assignments between differing types were only weakly controlled. As the sophistication of C increased (actually, with the introduction of C++), C/C++ compilers began to more strictly enforce the usage of data types. You can still cast a variable of one type and assign the result to a variable of a differing type, but using explicit casts is no longer considered an acceptable programming technique. There are three problems with strong data typing when you're dealing with objects and databases:
- You might not know in advance what data type(s) will be returned by an object when that object is created by another Windows application. For example, an object consisting of a range of cells in an Excel worksheet is likely to contain dates, strings, and numbers. The capability to accommodate indeterminate data types is an important consideration when you use OLE and its OLE Automation features.
- You can't concatenate variables of different fundamental data types without using data-type conversion functions. The need for data-type conversion complicates operations such as creating a composite index on table fields of different field data types (such as Text and Date); xBase, for example, needs a conversion function even in its indexing constructs, as in INDEX ON CharField + DTOS(DateField) TO IndexFile.
- Many database types now support null values. Conventional data types don't support the null value directly. The work-around, using SQL_NULL_DATA to specify a null value, is often cumbersome.
Visual C++ uses the SQLBindCol() function, which solves all the preceding problems of matching the SQL datatypes with Visual C++ variable types. An added benefit of the SQLBindCol() function is that you can use a number of different type conversions. Table 1.2 shows the acceptable conversions between C/C++ variable types and SQL data types. A D signifies a default conversion, a dot is a possible alternative conversion, and an empty space signifies that there is no conversion between these types. The types SQL_C_TINYINT and SQL_C_SHORT don't have default conversions.
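As a brief illustration (not one of the book's listings), the following fragment binds a character column with SQLBindCol(); hstmt is assumed to be an ODBC statement handle on which a SELECT statement has already been executed, and the buffer size is arbitrary:

UCHAR  szCompanyName[41];
SDWORD cbCompanyName;

::SQLBindCol(hstmt,                  // statement handle
             1,                      // column number (1-based)
             SQL_C_CHAR,             // C data type to convert the column to
             szCompanyName,          // buffer that receives the converted data
             sizeof(szCompanyName),  // length of the buffer
             &cbCompanyName);        // receives the data length or SQL_NULL_DATA

while (::SQLFetch(hstmt) == SQL_SUCCESS)
{
    if (cbCompanyName != SQL_NULL_DATA)
    {
        // szCompanyName holds the converted value for this row
    }
}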
Table 1.3 contains the SQL C types (shown in Table 1.2) matched to native C/C++ types. The DATE_STRUCT, TIME_STRUCT, and TIMESTAMP_STRUCT structures are defined in the SQLEXT.H header file. They make date and time manipulation easier.

Table 1.3. SQL C types matched to native C/C++ types.

SQL C Type Identifier    ODBC C Type          C/C++ Type
SQL_C_BINARY             UCHAR FAR *          unsigned char far *
SQL_C_BIT                UCHAR                unsigned char
SQL_C_CHAR               UCHAR FAR *          unsigned char far *
SQL_C_DATE               DATE_STRUCT          struct DATE_STRUCT
SQL_C_DOUBLE             SDOUBLE              double
SQL_C_FLOAT              SFLOAT               float
SQL_C_SLONG              SDWORD               long int
SQL_C_SSHORT             SWORD                short int
SQL_C_STINYINT           SCHAR                signed char
SQL_C_TIME               TIME_STRUCT          struct TIME_STRUCT
SQL_C_TIMESTAMP          TIMESTAMP_STRUCT     struct TIMESTAMP_STRUCT
SQL_C_ULONG              UDWORD               unsigned long int
SQL_C_USHORT             UWORD                unsigned short int
SQL_C_UTINYINT           UCHAR                unsigned char
The three date and time structures are shown in Listing 1.1. These structures are defined in SQLEXT.H so that you don't have to define them in your application. Listing 1.1. SQL time and date transfer structures.
typedef struct tagDATE_STRUCT
{
    SWORD   year;       // 0 to 9999
    UWORD   month;      // 1 to 12
    UWORD   day;        // 1 to valid number of days in the month
} DATE_STRUCT;

typedef struct tagTIME_STRUCT
{
    UWORD   hour;       // 0 to 23
    UWORD   minute;     // 0 to 59
    UWORD   second;     // 0 to 59
} TIME_STRUCT;

typedef struct tagTIMESTAMP_STRUCT
{
    SWORD   year;       // 0 to 9999
    UWORD   month;      // 1 to 12
    UWORD   day;        // 1 to valid number of days in the month
    UWORD   hour;       // 0 to 23
    UWORD   minute;     // 0 to 59
    UWORD   second;     // 0 to 59
    UDWORD  fraction;   // Nanoseconds
} TIMESTAMP_STRUCT;
- CDatabase objects function as the linkage between the application and the actual dataset. In C programs, the functionality of the CDatabase object is available using the SQL...() functions. You can open and use as many simultaneous CDatabase objects as you want.
- CRecordset objects represent the results, or set of records, obtained from a dataset. A CRecordset object contains records drawn from the tables contained in the CDatabase object.
- CRecordView objects are based on the CFormView class. With CFormView, your application functions much like any other dialog-based application. When you use AppWizard to create a Visual C++ application, the default is to use the CFormView class to display your records.
In the preceding list, the examples of the syntax of statements that create the database objects represent the simplest form of these statements. CDatabase and CRecordset objects have optional arguments or required calls to initialization functions to open and define the actual dataset. You set the value of the optional arguments based on the database type you choose and the type of locking you want.
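A minimal sketch of the simplest statements that create and open these objects follows; the "Northwind" datasource name and the CCustomerSet class are assumptions, the latter standing in for an AppWizard-generated CRecordset-derived class:

CDatabase dbNorthwind;
dbNorthwind.Open("Northwind");            // "Northwind" is an assumed ODBC datasource name

CCustomerSet rsCustomers(&dbNorthwind);   // attach the recordset to the open database
rsCustomers.Open();                       // run the query defined by the derived class

// ... display or edit records here ...

rsCustomers.Close();
dbNorthwind.Close();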
Object Collections
A collection is a set of references to related objects, similar to but not identical to an array. The specification for creating and naming collections is included in the Microsoft OLE publication Creating Programmable Applications. The references (pointers) to objects in a collection are called members of the collection. Each member of a collection has a unique name and index value. Unlike arrays, however, the index number of a member may change, and index numbers need not be contiguous. It's possible for a collection to contain no members at all. Most collections have a property, Count, that returns the number of members of the collection. The index to a collection need not be an integer, but it usually is. Some objects use string indexes. The safest approach is to always specify the unique name of the member of a collection you want to use. The name of a collection is the English plural of the class of object in the collection. In Visual C++, collections might include dialog boxes (all dialog boxes that have been loaded by the application), controls (each control on a loaded dialog box), the data access object collections in the following list, and collections of objects exposed by OLE applications that support OLE Automation. This discussion is limited to data access objects that incorporate the following three object class collections:
- TableDefs is the collection of TableDef objects that contain a description of each table contained in the database.
- Fields is the collection of Field objects for a TableDef object. Field objects contain a description of each field of a table.
- Indexes is the collection of Index objects for a TableDef object. Index objects describe each index on one or more fields of the table.

(A short sketch that walks these collections with MFC's DAO classes follows this list.)
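The following sketch, which isn't taken from the book's sample code, walks the TableDefs collection by using MFC's DAO classes; the .MDB path is an assumption:

#include <afxdao.h>

void ListTables()
{
    CDaoDatabase db;
    db.Open("C:\\DATA\\NORTHWIND.MDB");            // assumed path to an Access database

    int nTables = db.GetTableDefCount();           // members of the TableDefs collection
    for (int i = 0; i < nTables; i++)
    {
        CDaoTableDefInfo tdInfo;
        db.GetTableDefInfo(i, tdInfo);             // fills tdInfo.m_strName and other members
        TRACE1("Table: %s\n", (LPCTSTR)tdInfo.m_strName);
    }
    db.Close();
}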
Visual C++ provides a CRecordView object that lets you add controls to a Visual C++ dialog box that may be used to display records from a dataset. Controls may be used to display and update data in the current record of a specified CRecordset object. Figure 1.3 illustrates a Visual C++ application's use of the CRecordView dialog box and controls to display and update information contained in the Customers table of NorthWind.MDB (supplied with Access). Figure 1.3. A CRecordView-based application. The advantage of using the CRecordView object is that you can create a form to browse through the records in a CRecordset object without writing any Visual C++ code at all. The source for this program is in the CHAPTR02\Record View folder on the CD that comes with this book.
NOTE The sample program shown in Figure 1.3 has no code added by me (the author). I did add the controls to display the data in the dialog box and bind (existing) variables to these controls. I didn't modify any source files by hand to create this project. All the modifications were done by using the resource editor and ClassWizard, working on an application generated by using AppWizard. Perhaps the day of programmerless programming has come to C++!
One feature of the program that AppWizard creates is the toolbar with its VCR-style buttons, similar to the record selector buttons of Access's datasheet view. Many database developers prefer to use command buttons with Alt-key combinations for record selection. The majority of the sample applications in this book use Visual C++ code generated by using AppWizard rather than trying to code the database access by hand. Visual C++ provides the following dialog box control objects that you can use in conjunction with dialog boxes:
- Text box controls are the basic control element for data control objects. You can display and edit data of any field data type, not just text, in a bound text box control.
- Label controls display the data in a field but don't allow editing of the data. Bound label controls can't receive the Windows focus; thus, label controls are useful in decision-support applications in which read-only access to the database is the norm.
- Check box controls display or update data in fields of the Boolean field data type (called yes/no fields in Access 1.x and logical fields in xBase). The null value is represented by making the check box gray.

(A sketch showing how these controls are bound to recordset fields follows this list.)
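The following sketch (the view class, control IDs, and field data members are hypothetical) shows how a CRecordView-derived class binds such controls to the field data members of its recordset with the record field exchange functions that ClassWizard generates:

void CCustomerView::DoDataExchange(CDataExchange* pDX)
{
    CRecordView::DoDataExchange(pDX);
    //{{AFX_DATA_MAP(CCustomerView)
    DDX_FieldText(pDX, IDC_COMPANY_NAME, m_pSet->m_CompanyName, m_pSet);   // bound text box
    DDX_FieldText(pDX, IDC_CONTACT_NAME, m_pSet->m_ContactName, m_pSet);   // bound text box
    DDX_FieldCheck(pDX, IDC_ACTIVE, m_pSet->m_Active, m_pSet);             // bound check box
    //}}AFX_DATA_MAP
}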
Chapter 3, "Using Visual C++ Data Access Functions," provides examples of simple decision-support and transactionprocessing applications that you can create with the data control and bound control objects.
NOTE
Access developers will regret the absence of a Visual C++ equivalent of the drop-down combo box in Access. You need to write a substantial amount of Visual C++ code to duplicate the features of Access's built-in bound combo box. Visual C++ also lacks an equivalent of Access's subforms.
- In-place activation of OLE server (source) applications: When you double-click to activate an embedded OLE object in your container (OLE client) application, the server application takes over the window created by your form and substitutes the server application's menus for the menus of your form. (You can activate an OLE object when the OLE control receives the focus by setting the AutoActivate property of the OLE control to 1.) OLE applications create their own editing window when activated. Visual C++ supports in-place activation only with embedded objects.
- Persistent OLE objects: The data associated with an OLE object ordinarily isn't persistent; that is, the data is no longer accessible after you close a form. The OLE control lets you save the OLE object's data as a persistent object and restore the persistent OLE object from a binary file. (The standard file extension for an OLE object file is, not surprisingly, .OLE.)
- OLE Automation: OLE Automation gives your OLE controls access to application programming objects exposed by OLE server applications that support OLE Automation. Microsoft Excel, Project, and Word for Windows include OLE Automation capabilities. OLE Automation lets you manipulate data in an OLE object with Visual C++ code; thus, you can place data from a Visual C++ database application into an Excel worksheet directly rather than by using DDE.
The first commercial product to support OLE was CorelDRAW! 4.0, which Corel Systems released after the first version of Visual C++ appeared. The lack of OLE-compliant applications caused the description of OLE features in the Visual C++ documentation to be sketchy at best. The OLE sample applications provide you with little assistance when you want to add OLE features to your Visual C++ applications. To fill this gap, the following sections provide an introduction to OLE Automation. Chapter 17, "Using OLE Controls and Automation with Visual C++ Applications," includes sample applications that demonstrate OLE features that are especially useful for database applications.
NOTE
One of the best books on OLE is Kraig Brockschmidt's Inside OLE 2, Second Edition (Microsoft Press, 1995). This book is universally considered to be the bible of OLE programmers. Microsoft Press also publishes a two-volume reference on OLE called the OLE Programmer's Reference (1994). This book, and Brockschmidt's, were published electronically on the MSDN CD in early 1995; however, they are no longer available on CD.
NOTE When using OLE under Windows 3.1, you need to use the DOS TSR application SHARE.EXE prior to loading Windows. Specify at least 500 available locks with a SHARE /l:500 statement in your AUTOEXEC.BAT file. Windows 95 and the enhanced mode of Windows for Workgroups 3.1 and later install a driver, VSHARE.386, that substitutes for and disables SHARE.EXE. Thus, if you need SHARE.EXE only for applications that you run under Windows for Workgroups 3.1+, you don't need to (and therefore shouldn't) load SHARE.EXE.
OLE Automation
Visual C++ lets you create applications that orchestrate interprocess communication among Windows applications without requiring that you suffer through the coding and testing of DDE operations. In the language of OLE, Visual C++ is called an external programming tool. OLE Automation programming tools let you do the following:
- Create new objects of the object classes supported by OLE Automation source applications.
- Manipulate existing objects in OLE Automation source and container applications.
- Obtain and set the values of properties of objects.
- Invoke methods that act on the objects.

(A minimal C++ sketch of these operations follows this list.)
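As a minimal sketch of these operations (using the raw OLE API rather than the book's examples), the following code creates an Excel Application object and sets its Visible property through the IDispatch interface; error handling is reduced to the essentials:

#include <windows.h>
#include <ole2.h>

void ShowExcel()
{
    if (FAILED(::OleInitialize(NULL)))
        return;

    CLSID clsid;
    IDispatch* pDisp = NULL;
    if (SUCCEEDED(::CLSIDFromProgID(OLESTR("Excel.Application"), &clsid)) &&
        SUCCEEDED(::CoCreateInstance(clsid, NULL, CLSCTX_LOCAL_SERVER,
                                     IID_IDispatch, (void**)&pDisp)))
    {
        // Obtain the DISPID of the Visible property by name
        OLECHAR* pszMember = OLESTR("Visible");
        DISPID dispid;
        if (SUCCEEDED(pDisp->GetIDsOfNames(IID_NULL, &pszMember, 1,
                                           LOCALE_USER_DEFAULT, &dispid)))
        {
            VARIANTARG varg;
            ::VariantInit(&varg);
            varg.vt = VT_BOOL;
            varg.boolVal = VARIANT_TRUE;

            DISPID dispidPut = DISPID_PROPERTYPUT;
            DISPPARAMS dispparams = { &varg, &dispidPut, 1, 1 };
            pDisp->Invoke(dispid, IID_NULL, LOCALE_USER_DEFAULT,
                          DISPATCH_PROPERTYPUT, &dispparams, NULL, NULL, NULL);
        }
        pDisp->Release();
    }
    ::OleUninitialize();
}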
Prior to Visual C++ and OLE Automation, you could link or embed source documents in a destination document created by the OLE control, but you couldn't use Visual C++ code to edit the source document. DDE was the only practical method of programmatically altering data in an object created by another application. (Programmatically is an adverb recently added to computerese by Microsoft. It refers to the capability of manipulating an object with program code.) The following list explains the principal advantages of the use of OLE Automation to replace DDE for applications that require IPC (interprocess communication):
- OLE places an object-oriented shell around IPC applications. You can manipulate objects in OLE Automation applications as if they were objects of your Visual C++ application. You also can manipulate an OLE object that is embedded in or linked to an OLE control object contained in a Visual C++ form.
- You create a new object in an OLE source application by declaring an object. After you've created an object variable, you can change the value of each object property that isn't read-only and apply any of the methods that are applicable to the object with statements in your Visual C++ code. Methods usually include all of the application's menu choices, plus the equivalents of other statements or functions in the application's macro language. Thus, OLE Automation lets you substitute an Excel worksheet for a Visual C++ grid control in your database applications.
- As mentioned earlier in this chapter, you can create a persistent copy of an OLE object by saving the value of the object's Data property to an .OLE file. Later, you can retrieve the object by reading the data from the .OLE file into an OLE control.
OLE Automation offers the most significant opportunity for the improvement of Windows applications since Microsoft introduced OLE 1.0 with Windows 3.1. The majority of the major software publishers have announced their intention to support OLE, but few firms other than Microsoft have committed to dates when such products will reach the shelves of software retailers. OLE 1.0 proved difficult to implement, and creating OLE applications is an even more challenging task. At the time this book was written, Symantec's C++ product and Borland's C++ 4.5 were among the few programming tools competing with Visual C++. At present, only Microsoft's Windows applications offer you the sizable benefits of OLE Automation. OLE Automation is expected to be a feature of future versions of other popular Microsoft applications, such as Access and PowerPoint. By the end of 1995, virtually all mainstream Windows applications implemented OLE Automation. Much of the adoption of OLE has been forced by Microsoft, which requires all certified Windows 95 applications to be fully OLE-compliant, if applicable.
Why am I mentioning Visual Basic in a book on Visual C++? Mostly for background. It's likely that you have some background in Visual Basic or that you're interested in Visual Basic's relationship to Microsoft products. Also, Visual Basic for Applications is the OLE Automation language.

Bill Gates, chairman and chief executive officer of Microsoft, decreed in 1991 that all of Microsoft's mainstream applications for Windows would share a common macro language (CML) derived from BASIC. His pronouncement wasn't surprising, because Microsoft's first product was a BASIC interpreter for the original personal computers that used the Intel 8080 as their CPU. Microsoft's QuickBASIC and QBasic products ultimately became the most widely used flavors of structured BASIC on the world's IBM-compatible PCs. Word for Windows 1.0's Word BASIC was the first macro language for a Microsoft application that used a dialect of BASIC.

No other Microsoft application adopted BASIC as its macro language until Microsoft released Access 1.0 in November 1992. Access, however, had its own macro language that wasn't derived from BASIC, so Microsoft called Access Basic an application programming language. Access Basic is a direct descendant of Visual Basic 2.0, which introduced object variables to the Visual Basic language. Access Basic was originally called "Embedded Basic"; you see occasional references to "EB" and "Cirrus Basic" in Access 1.0 help files and add-in library code. Cirrus was the code name for Access during its beta-testing period.
Visual Basic for Applications is an OLE Automation programming tool classified as an embedded macro language. Visual Basic for Applications is based on Visual Basic and offers many of Visual Basic's capabilities. The structure and syntax of Visual Basic for Applications code is very similar to that of Visual Basic. Following are some of the most significant differences you'll find between Visual C++ and Visual Basic for Applications:
- All Visual Basic for Applications code is contained in one or more modules stored within the application. Excel stores code modules in a workbook. You create an Excel 7 module by choosing Insert | Macro | Module. Like Word Basic macros, all the functions and procedures in a module appear consecutively rather than in the individual windows employed by the Visual Basic and Visual Basic for Applications code editors.
- Most applications execute the entry point by selecting the macro from a list box of the dialog box that appears when you choose Tools | Macros. To prevent subprocedures from appearing in the macro list box, you preface Sub ProcName with the Private reserved word.
- There is no Declarations section in a Visual Basic for Applications module. You declare Global and module-level variables and constants at the beginning of a module, before the first function or procedure you write.
- After you open a module, you can use the Object Browser to display the objects that are exposed by an OLE Automation application and thus are available to your application. Figure 1.4 shows the Object Browser dialog box for Visual Basic for Applications opened over an Excel module containing demonstration code. Object Browsers are another class of OLE Automation programming tools.
Figure 1.4. Excel 5's module editing window displaying the Object Browser dialog.
- You can use Visual Basic for Applications to reconstruct the menu choices of applications and to create custom toolbars. The smiley-face button that appears at the left of the top toolbar in Figure 1.4 is added with a Visual Basic for Applications procedure.
- Windows DLLs that incorporate OLE Automation code expose functions that appear in the Object Browser dialog. You don't need to use the Declare Function or Declare Sub statements to use OLE Automation functions in OLE-compliant DLLs.
- Visual Basic for Applications supports no visual objects of its own, such as forms. The only exceptions are Windows message boxes and input boxes. The OLE Automation application itself must provide a window to display other control objects, such as text boxes and command buttons. Excel, for example, provides a Dialogs collection of Dialog objects that constitute Excel's built-in dialog boxes, and it provides a DialogSheet object that can contain a custom-designed dialog box that includes default OK and Cancel buttons. Each Workbook object can contain a DialogSheets collection. You create an Excel dialog sheet by choosing Insert | Macro | Dialog. Figure 1.5 shows the design-mode and run-mode appearance of an Excel 5 dialog box with typical control objects.
Figure 1.5. The design-mode and run-mode versions of an Excel dialog sheet.
- You can declare any object that is exposed by an OLE Automation application as an object variable of the appropriate class. For example, you can declare an Excel worksheet as an object of the class Worksheets. Other Excel object classes are Application and Range. You have access to each of the properties and can apply any of the methods of the application object. You can apply the Cells method to any of the Excel object classes to return a collection of cells contained in the object.
- Property Let ProcName...End Property procedures assign the values of properties of objects, and Property Get FunctionName...End Property functions return the values of properties of objects. The structure of Property procedures and functions is identical to conventional Function FunctionName...End Function and Sub ProcName...End Sub procedures.
- Visual Basic for Applications runs on both Intel 80x86 and Macintosh computers. You need to make only minor changes to your code to port a Visual Basic for Applications program from the PC to the Mac. You declare and use functions in Macintosh code resources and Windows dynamic link libraries with the same Visual Basic syntax.
During the development of Visual Basic for Applications (when its code name was Object Basic), Microsoft reportedly was willing to license Visual Basic for Applications to other software publishers for incorporation in their applications. Subsequently, Microsoft announced that Visual Basic for Applications would remain a proprietary Microsoft product and would be available only as a component of Microsoft applications for Windows. Lotus now provides a common programming interface to its products. Lotus also is committed to supporting OLE in its products. Lotus Notes will prove to be a formidable competitor in the next few years. Lotus is working to become compatible with standard languages, which will allow database programmers to leverage their existing programming skills.
Summary
This chapter covered the process of choosing Visual C++ as a database development tool, using Visual C++ as a database front-end generator, migrating from the more traditional database programming languages, the MFC ODBC and DAO classes, and OLE. This chapter also gave you an overview of Visual C++'s capabilities as a database development platform and how Microsoft plans to use Visual C++, OLE Automation, and Visual Basic for Applications to cement the firm's leadership position in the Windows desktop database market. You don't need to be clairvoyant to conclude that the Macintosh version of Visual C++ (actually a cross compiler) will emerge as a major player in the Macintosh world in the future, together with an Access database engine designed to run as a Macintosh code resource. No matter what your opinions are relating to Microsoft's predominance in the Windows and Macintosh applications markets and the methods Microsoft has used to achieve its present market share, the Microsoft desktop database juggernaut is a fact. Developers of traditional character-based database applications in xBase or PAL who don't face this fact will find a rapidly diminishing market for their services in the mid- to late 1990s.

The remaining two chapters in Part I of this book give you the basic details you need to use Visual C++'s data access objects, the CFormView object, and bound control objects to create simple Visual C++ database applications that display and edit data contained in Access databases. Even accomplished Visual C++ developers should scan the next two chapters, because Visual C++'s data access objects differ somewhat from the other Visual C++ MFC objects when used in an AppWizard-generated application.
Defining the Characteristics of Data-Related Objects and Classes
The MFC Database Classes
    CDatabase
        Data Members
        Construction
        Database Attributes
        Database Operations
        Database Overridables
    CRecordset
        Data Members
        Construction/Destruction
        Recordset Attributes
        Recordset Update Operations
        Recordset Navigation Operations
        Other Recordset Operations
        Recordset Overridables
    CRecordView
        Construction
        Attributes
        Operations
    CFieldExchange
    CDBException
        Data Members
    CLongBinary
        Data Members
        Construction Member
    RFX Functions
    An AppWizard-Generated Program
        CRecordView Support
        CRecordset Support
Summary
The MFC data access classes are Visual C++'s object-oriented method of interacting with datasources. MFC's implementation of ODBC supports three major objects: CDatabase, CRecordView, and CRecordset. Supporting classes include CDBException, CFieldExchange, and CLongBinary. Most commonly, programmers use these objects when working with applications created with Visual C++'s AppWizard program. Any database application created by using AppWizard will incorporate these classes. Chapter 1 introduced Visual C++ and accessing databases. This chapter describes the structure of the MFC data access classes in detail because the member functions of these classes constitute the foundation on which all of your Visual C++ MFC database applications are built. This chapter features examples that use the member functions to create Visual C++ code. By the time you complete this rather lengthy chapter, it's very likely that you will have learned more than you ever wanted to know about data-related objects and classes! Programmers who want to "roll their own" and use the database classes will have few (if any) problems incorporating them into their applications. However, for a simple front-end application in which data access and updating are the main functions of the program, using AppWizard is the simplest choice. The sample program shown in Figure 2.1 took only about 10 minutes to write and required no manual source code modification to create. The source code for this program is on the CD that comes with this book (see the directory CHAPTR02\Record View). Take a look at the program to see how simple it is to create a quick ODBC application. Figure 2.1. A CRecordView-based application.
NOTE Also found on the CD is a 16-bit MFC 2.5 version of the same program, which is in the directory CHAPTR02\RECVIEW. The RECVIEW program should be built only by using Visual C++ 1.5x.
NOTE
Technically, you should be able to alter any property of a programmable object by assigning an appropriate value to the "set" member of the function pair. The ability to set property values under specific conditions depends on the type of object and the application in which the object is used. Access 1.x, for example, has many objects whose properties can be set only in design mode.
CDatabase
The CDatabase class object is used to encapsulate a connection to a database. The CDatabase object may then be used to operate on the database and is derived from the CObject base class. Figure 2.2 shows the class hierarchy for the CDatabase class. Figure 2.2. The CDatabase class hierarchy. The CDatabase class object has a number of member functions. These functions are divided into the following five categories:
- Data members: The data members of the CDatabase class hold information that is used when you're working directly with the database that the CDatabase object has been attached to.
- Construction: The constructor and a set of database open/close functions form the construction members.
- Database attributes: Nine functions are used to obtain information about the database that the CDatabase object has been attached to.
- Database operations: The five database operation functions allow for transaction processing and the execution of direct SQL commands.
- Database overrides: Two overridable functions are provided to let the programmer customize the functionality of the CDatabase object.
The following sections take a closer look at the members of this class. The members of the CObject class (which CDatabase is derived from) aren't covered in this book. Refer to the Visual C++ documentation (either the manuals or the online help system) for full information about the CObject class.
Data Members
The CDatabase object contains only one data member. The m_hdbc member variable contains the ODBC connection handle to the database that is currently open. If no database is currently open, this member variable doesn't contain useful data. The m_hdbc member variable is of type HDBC. It can be used wherever an HDBC type variable is needed (for example, in one of the SQL...() functions). Here's an example of the usage of m_hdbc:
nReturnCode = ::SQLGetInfo(cdb.m_hdbc, SQL_ODBC_SQL_CONFORMANCE, &nReturn,
    sizeof(nReturn), &cbValue);

In this example, a call to a function that isn't a member function of CDatabase is made by using the m_hdbc member data variable.
Construction
Three member functions deal directly with CDatabase construction: CDatabase(), Open(), and Close(). There also is a default destructor, which I don't document here because it's never called by an application. The following paragraphs describe each construction member function and, where applicable, give examples of usage. The CDatabase() function is used to construct the CDatabase object. This function isn't called directly and has no parameters. The process of calling the constructor is taken care of during the initialization of the CDatabase object when it's created. Here is a typical creation of a CDatabase object (this code is usually in the header file for the document class):
CDatabase m_dbCustomerDB;    // No parameters
When creating a CDatabase object, your application must make sure that the CDatabase object is connected to a database. This is accomplished in a member function in the containing class, often called GetDatabase() (if the containing class is based on a CDocument type object). If you call CDatabase::Open(), passing a NULL as the lpszDSN parameter, the user will be presented with an open datasource dialog box. The Record View sample program for this chapter shows this dialog box (see Figure 2.1).
CDatabase* CMyDoc::GetDatabase()
{   // Returns NULL in the event of a failure!
    // m_dbCustomerDB is a member of CMyDoc!
    // Connect the object to a database
    if (!m_dbCustomerDB.IsOpen() && !m_dbCustomerDB.Open(NULL))
    {   // The database cannot be opened; we've failed!
        return(NULL);
    }
    else
    {   // We already had a database, or opened one:
        return(&m_dbCustomerDB);
    }
}

The Open() member function is used to establish a connection to a database. This connection is established through an ODBC driver. The Open() function takes a number of parameters. Here's the prototype of the MFC 2.5 version of the Open() function:
BOOL Open( LPCSTR lpszDSN,                // The name of the dataset
    BOOL bExclusive = FALSE,              // If the dataset is to be exclusive
    BOOL bReadOnly = FALSE,               // If the dataset is read-only
    LPCSTR lpszConnect = "ODBC;");        // The method of connection
The prototype of the MFC 3.0 (and later) version of the Open() function adds a new final parameter:

BOOL Open( LPCSTR lpszDSN,                // The name of the dataset
    BOOL bExclusive = FALSE,              // If the dataset is to be exclusive
    BOOL bReadOnly = FALSE,               // If the dataset is read-only
    LPCSTR lpszConnect = "ODBC;",         // The method of connection
    BOOL bUseCursorLib = TRUE);           // Whether to load the ODBC cursor library
The return value will be nonzero if the function is successful and zero if the user clicks the Cancel button in the Connection Information dialog box (if displayed). All other failures will cause the Open() function to throw an exception of type CDBException or CMemoryException. The Close() function is used to close the connection that was established with the Open() function. The Close() function takes no parameters and has no return value. If no connection is currently open, this function does nothing. A call to Close() will cancel any pending AddNew() or Edit() statements and will roll back (discard) any pending transactions.
Database Attributes
The database attribute functions are used to provide information to the application about the connection, driver, and datasource. These functions are often used in front-end applications. Other functions in this group set options for the datasource for the application. The following list shows the database attribute functions. The functions in the first column are supported by all datasources, and those in the second column might not be supported by all datasources.
Supported by All Datasources      Not Supported by All Datasources
GetConnect()                      SetLoginTimeout()
IsOpen()                          SetQueryTimeout()
GetDatabaseName()                 SetSynchronousMode()
CanUpdate()
CanTransact()
InWaitForDataSource()

The GetConnect() function is used to return the ODBC connect string that was used to connect the CDatabase object to a datasource. There are no parameters to the GetConnect() function, and it returns a CString object reference. The GetConnect() function's prototype is
const CString& GetConnect(); If there is no current connection, the returned CString object will be empty. The IsOpen() function is used to determine whether a datasource is currently connected to the CDatabase object. This function returns a nonzero value if there is currently a connection and a zero value if no connection is currently open. For an example of IsOpen(), see the earlier discussion of the Open() function.
The GetDatabaseName() function returns the name of the database currently in use. GetDatabaseName() returns a CString object. Its prototype is
CString GetDatabaseName(); The GetDatabaseName() function returns the database name if there is one. Otherwise, it returns an empty CString object. The CanUpdate() function returns a nonzero value if the database can be updated (by either modifying records or adding new records). If the database can't be modified, the CanUpdate() function returns a zero value. CanUpdate() takes no parameters and has the following prototype:
BOOL CanUpdate(); The ability to update a database is based both on how it was opened (how you set the read-only parameter in Open()) and on the capabilities of the ODBC driver. Not all ODBC drivers support the updating of databases. The CanTransact() function returns a nonzero value if the datasource supports transactions. (See the section "Database Operations" for more information about transactions with the CDatabase object.) The CanTransact() function takes no parameters and has the following prototype:
BOOL CanTransact(); The ability to support transactions is based on ODBC driver support. The InWaitForDataSource() function returns a nonzero value if the application is waiting for the database server to complete an operation. If the application isn't waiting for the server, the InWaitForDataSource() function returns a zero. InWaitForDataSource() takes no parameters and has the following prototype:
static BOOL PASCAL InWaitForDataSource();

This function is often called in the framework to disable the user interface while waiting for the server to respond. This is done to prevent the user from stacking unwanted commands or operations while the application waits for the server. The SetLoginTimeout() function is used to set the amount of time that the system will wait before timing out the connection. This option must be set before a call to Open() is made; it will have no effect if it's called after a database has been opened. This function has no return value. SetLoginTimeout() takes one parameter: the number of seconds after which a datasource connection attempt will time out. SetLoginTimeout() has the following prototype:

void SetLoginTimeout(DWORD dwSeconds);
The default login timeout is 15 seconds, an acceptable value for most applications. For applications that might be running on slow systems (perhaps where there are many other connections), the login timeout value might need to be set to a larger value. The SetQueryTimeout() function is used to set the amount of time that the system will wait before timing out the query. This option must be set before you open the recordset. It will have no effect if it's called after the recordset has been opened. This function has no return value. SetQueryTimeout() takes one parameter: the number of seconds after which a datasource connection attempt will time out. SetQueryTimeout() has the following prototype:
void SetQueryTimeout(DWORD dwSeconds);

The default query timeout is 15 seconds, an acceptable value for most applications. For applications that might be running on slow systems (perhaps where there are many other connections), the query timeout value might need to be set to a larger value.

WARNING
Setting the query timeout value to zero results in no time-outs and might cause the application to hang if a connection can't be made.

The SetQueryTimeout() function affects all subsequent Open(), AddNew(), Edit(), and Delete() calls. The SetSynchronousMode() function is used to either enable or disable synchronous processing for all recordsets and SQL statements associated with this CDatabase object. SetSynchronousMode() takes one parameter and has no return value. SetSynchronousMode() has the following prototype:

void SetSynchronousMode(BOOL bSynchronous);
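The following fragment, a sketch rather than code from the book, shows where these option-setting calls fit relative to opening the connection; the timeout values are arbitrary:

CDatabase cdb;
cdb.SetLoginTimeout(45);        // must precede Open(); allow 45 seconds to connect
cdb.Open(NULL);                 // NULL prompts the user to pick a datasource
cdb.SetQueryTimeout(60);        // applies to recordsets opened on cdb afterward
cdb.SetSynchronousMode(TRUE);   // process SQL statements synchronously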
Database Operations
Database operation functions are used to work with the database. The transaction processing functions (used to update the database) and the function used to issue an SQL command are all database operation functions. The database operation functions are

BeginTrans()
CommitTrans()
Rollback()
Cancel()
ExecuteSQL()

With the exception of ExecuteSQL(), these functions might not be implemented by all datasources. The BeginTrans() function is used to start a transaction on a database. Transactions are calls to AddNew(), Edit(), Delete(), or Update(). After the application has completed the transaction calls, either CommitTrans() or Rollback() must be called. The BeginTrans() function takes no parameters and returns a nonzero value if the call is successful. BeginTrans() has the following prototype:
BOOL BeginTrans();

BeginTrans() should never be called prior to opening a recordset; otherwise, there might be problems when calling Rollback(). Each BeginTrans() call must be matched to a CommitTrans() or Rollback() prior to a subsequent call to BeginTrans(), or an error will occur. If there are pending transactions when the datasource is closed, they are discarded, much as if there had been a call to Rollback() prior to closing the datasource. The CommitTrans() function is used to complete a transaction set begun with a call to BeginTrans(). CommitTrans() tells the datasource to accept the changes that were specified. CommitTrans() takes no parameters and returns a nonzero value if the call is successful. CommitTrans() has the following prototype:
BOOL CommitTrans(); You can discard the transaction by calling Rollback(). The Rollback() function is used to end a transaction processing operation, discarding the transaction. Rollback() takes no parameters and returns a nonzero value if the call was successful. Rollback() has the following prototype:
BOOL Rollback();

You can accept the transaction by using the CommitTrans() function. The Cancel() function is used to terminate an asynchronous operation that is currently pending. This function causes the OnWaitForDataSource() function to be called until it returns a value other than SQL_STILL_EXECUTING. Cancel() takes no parameters and has no return value. Cancel() has the following prototype:
void Cancel();

If no asynchronous operation is pending, this function simply returns. The ExecuteSQL() function is used to execute an SQL command. The SQL command is contained in a NULL-terminated string. A CString object may also be passed to the ExecuteSQL() function if desired. ExecuteSQL() takes one parameter and has no return value. ExecuteSQL() has the following prototype:
void ExecuteSQL(LPCSTR szSQLCommand); The ExecuteSQL() function throws a CDBException if there is an error in the SQL statement. ExecuteSQL() won't return any data records to the application. Use the CRecordset object to obtain records instead.
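A minimal sketch (not from the book's CD) that ties together the transaction and ExecuteSQL() members described in this section follows; the table and column names are assumptions, and cdb is an open CDatabase object:

void RaisePrices(CDatabase& cdb)
{
    if (!cdb.CanTransact())
        return;                               // the driver doesn't support transactions

    cdb.BeginTrans();
    TRY
    {
        cdb.ExecuteSQL("UPDATE Products SET UnitPrice = UnitPrice * 1.1");
        cdb.CommitTrans();                    // accept the changes
    }
    CATCH(CDBException, e)
    {
        cdb.Rollback();                       // discard the changes on any SQL error
        AfxMessageBox(e->m_strError);
    }
    END_CATCH
}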
Database Overridables
The overridable functions OnSetOptions() and OnWaitForDataSource() are used to allow the framework to set options and control the operation of the application. Neither of these functions is mandatory. If the programmer elects not to code these functions, a default operation will take place. The OnSetOptions() function is called when the ExecuteSQL() function is being used to execute an SQL statement. OnSetOptions() takes one parameter and has no return value. OnSetOptions() has the following prototype:
void OnSetOptions(HSTMT hstmt);

The default OnSetOptions() function is shown in the following code fragment. You could use this code in your handler as an example of how to code an OnSetOptions() function. The default implementation sets the query timeout value and the processing mode to either asynchronous or synchronous. Your application can set these options prior to the ExecuteSQL() function call by calling SetQueryTimeout() and SetSynchronousMode(). Microsoft uses the calls to AFX_SQL_SYNC() in its database code.
void CDatabase::OnSetOptions(HSTMT hstmt)
{
    RETCODE nRetCode;
    ASSERT_VALID(this);
    ASSERT(m_hdbc != SQL_NULL_HDBC);

    if (m_dwQueryTimeout != -1)
    {
        // Attempt to set query timeout.  Ignore failure
        AFX_SQL_SYNC(::SQLSetStmtOption(hstmt, SQL_QUERY_TIMEOUT,
            m_dwQueryTimeout));
        if (!Check(nRetCode))
            // don't attempt it again
            m_dwQueryTimeout = (DWORD)-1;
    }

    // Attempt to set AFX_SQL_ASYNC.  Ignore failure
    if (m_bAsync)
    {
        AFX_SQL_SYNC(::SQLSetStmtOption(hstmt, SQL_ASYNC_ENABLE, m_bAsync));
        if (!Check(nRetCode))
            m_bAsync = FALSE;
    }
}

The OnWaitForDataSource() function is called to allow the application to yield time to other applications while waiting for asynchronous operations. OnWaitForDataSource() takes one parameter and has no return value. OnWaitForDataSource() has the following prototype:
void OnWaitForDataSource(BOOL bStillExecuting); The bStillExecuting parameter is set to TRUE for the first call to OnWaitForDataSource() when it's called prior to an asynchronous operation. The following code fragment shows the default OnWaitForDataSource() function. You could use this code in your handler as an example of how to code an OnWaitForDataSource() function if your application requires one.
void CDatabase::OnWaitForDataSource(BOOL bStillExecuting)
{
    ASSERT_VALID(this);
    ASSERT(m_hdbc != SQL_NULL_HDBC);

    _AFX_THREAD_STATE* pThreadState = AfxGetThreadState();
    CWinApp* pApp = AfxGetApp();
    if (!bStillExecuting)
    {
        // If never actually waited...
        if (m_dwWait == 0)
            return;
        if (m_dwWait == m_dwMaxWaitForDataSource)
            pApp->DoWaitCursor(-1);             // EndWaitCursor
        m_dwWait = 0;
        pThreadState->m_bWaitForDataSource--;
#ifdef _DEBUG
        if (afxTraceFlags & traceDatabase)
            TRACE0("DONE WAITING for datasource.\n");
#endif
        return;
    }

    if (m_dwWait == 0)
    {
        pThreadState->m_bWaitForDataSource++;
        // 1st call; wait for min amount of time
        m_dwWait = m_dwMinWaitForDataSource;
#ifdef _DEBUG
        if (afxTraceFlags & traceDatabase)
            TRACE0("WAITING for datasource.\n");
#endif
    }
    else
    {
        if (m_dwWait == m_dwMinWaitForDataSource)
        {
            // 2nd call; wait max time; put up wait cursor
            m_dwWait = m_dwMaxWaitForDataSource;
            pApp->DoWaitCursor(1);              // BeginWaitCursor
        }
    }

    CWinThread* pThread = AfxGetThread();
    DWORD clockFirst = GetTickCount();
    while (GetTickCount() - clockFirst < m_dwWait)
    {
        MSG msg;
        if (::PeekMessage(&msg, NULL, NULL, NULL, PM_NOREMOVE))
        {
            TRY
            {
                pThread->PumpMessage();
            }
            CATCH_ALL(e)
            {
                TRACE0("Error: exception in OnWaitForDataSource - continuing.\n");
                DELETE_EXCEPTION(e);
            }
            END_CATCH_ALL
        }
        else
            pThread->OnIdle(-1);
    }
}
CRecordset
The CRecordset object is used to manage recordsets. This object is often used with the CDatabase and CRecordView objects. The member functions in the CRecordset object offer a powerful set of database record manipulation tools. The CRecordset object is derived from the CObject base class. Figure 2.3 shows the class hierarchy for the CRecordset class. Figure 2.3. The CRecordset class hierarchy. The CRecordset class object has a number of member functions. These functions are divided into the following seven categories:
- Data members: The data members of the CRecordset class hold information that is used when you're working directly with the database that the CRecordset object has been attached to.
- Construction: The constructor and a set of database open/close functions form the construction members.
- Recordset attributes: Thirteen functions are used to obtain information about the recordset that the CRecordset object has been attached to.
- Recordset update operations: The four CRecordset update operation members allow for transaction processing.
- Recordset navigation operations: The five CRecordset navigation operation functions allow for moving throughout the records contained within the recordset.
- Other recordset operations: The eight other CRecordset operation functions provide miscellaneous functionality.
- Recordset overrides: Five overridable functions are provided to let the programmer customize the functionality of the CRecordset object.
The following sections take a closer look at the members of this class. I don't cover the members of the CObject class (which CRecordset is derived from) in this book. Refer to the Visual C++ documentation (either the manuals or the online help system) for full information about the CObject class.
Data Members
- The m_hstmt member variable contains the ODBC statement handle for the recordset. This variable has a type of HSTMT.
- The m_nFields member variable contains the number of field data members (the number of columns retrieved from the datasource) in the recordset. This variable has a type of UINT.
- The m_nParams member variable contains the number of parameter data members in the recordset. This variable has a type of UINT.
- The m_strFilter variable contains a CString that contains an SQL WHERE clause. This CString will be used as a filter to select only records that meet the specified search criteria.
- The m_strSort variable contains a CString that contains an SQL ORDER BY clause. This CString will be used to control the sorting of the retrieved records.

(A short example of setting m_strFilter and m_strSort follows this list.)
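The following sketch (CCustomerSet is an assumed AppWizard-generated CRecordset-derived class with a bound m_CompanyName column, and m_dbCustomerDB is the CDatabase member shown earlier) shows m_strFilter and m_strSort at work:

CCustomerSet rsCustomers(&m_dbCustomerDB);
rsCustomers.m_strFilter = "Country = 'USA'";      // becomes the WHERE clause
rsCustomers.m_strSort = "CompanyName";            // becomes the ORDER BY clause
if (rsCustomers.Open(CRecordset::snapshot))
{
    while (!rsCustomers.IsEOF())
    {
        // use rsCustomers.m_CompanyName and the other field data members here
        rsCustomers.MoveNext();
    }
    rsCustomers.Close();
}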
Construction/Destruction
Three member functions deal directly with CRecordset construction: CRecordset(), Open(), and Close(). There also is the default destructor, which I won't document here because it's never called by an application. The following paragraphs describe each construction member function and, where applicable, give examples of usage. The CRecordset() function is the constructor for the CRecordset class object. CRecordset() takes one parameter, and, because it's a constructor, it has no specified return value. CRecordset() has the following prototype:
CRecordset(CDatabase* pDatabase = NULL);

The default operation is to create and initialize the CRecordset object. If pDatabase is specified, this CDatabase object will be used with the CRecordset object. If the pDatabase pointer is NULL, the constructor will create a default CDatabase member class. If you create a derived class, the derived class must have its own constructor. Your constructor will then call the CRecordset::CRecordset() constructor, passing the appropriate parameter. The Open() function is used to run a query that will return a recordset to the application. Open() takes three parameters and returns a nonzero value if the recordset is opened successfully. Open() has the following prototype:
virtual BOOL Open( UINT nOpenType = snapshot,   // Either dynaset, snapshot, or forwardOnly
    LPCSTR lpszSql = NULL,                      // NULL, table name, SELECT, or CALL statement
    DWORD dwOptions = none);                    // None, appendOnly, or readOnly
The default operation for Open() is to open a datasource. Open() will throw a CDBException, CMemoryException, or CFileException if there are errors. The Close() function is used to close the currently open recordset. If no recordset is open, this function simply returns. After calling Close(), it's possible to then re-call Open() to reopen the recordset, thereby reusing the CRecordset object. The Close() function takes no parameters and has no return value. Close() has the following prototype:
void Close(); The default operation for Close() is to close the recordset and the ODBC HSTMT that was associated with the recordset.
Recordset Attributes
Thirteen member functions deal directly with CRecordset attributes. These member functions are listed here:

CanAppend()
CanRestart()
CanScroll()
CanTransact()
CanUpdate()
GetRecordCount()
GetStatus()
GetTableName()
GetSQL()
IsOpen()
IsBOF()
IsEOF()
IsDeleted()

With these member functions, applications can obtain information about the recordset.
The CanAppend() function is used to determine whether or not new records can be appended to the end of the recordset. Records are added by using the AddNew() function. CanAppend() takes no parameters and returns a nonzero value if the recordset can have records appended. CanAppend() has the following prototype:
BOOL CanAppend(); Typically, CanAppend() is called to enable or disable the user interface's record append commands and tools. The CanRestart() function is used to determine whether the query can be restarted. CanRestart() takes no parameters and returns a nonzero value if the query can be restarted. CanRestart() has the following prototype:
BOOL CanRestart(); The CanRestart() function is usually called prior to calling the Requery() member function. The CanScroll() function is used to determine whether the recordset allows scrolling. CanScroll() takes no parameters and returns a nonzero value if the recordset allows scrolling. CanScroll() has the following prototype:
BOOL CanScroll();
The CanTransact() function is used to determine whether the recordset supports transactions. CanTransact() takes no parameters and returns a nonzero value if transactions are supported. CanTransact() has the following prototype:
BOOL CanTransact();
The CanUpdate() function is used to determine whether the recordset supports updating. Updating would typically fail if the underlying database were opened in read-only mode. CanUpdate() takes no parameters and returns a nonzero value if the recordset supports updating. CanUpdate() has the following prototype:
BOOL CanUpdate();

The most common reason that a recordset can't be updated when the ODBC driver supports updating is that it has been opened in read-only mode. Read-only mode offers faster access (there is no need to perform record locking) at the expense of being able to update the recordset. The GetRecordCount() function is used to determine the number of records in the current recordset. GetRecordCount() takes no parameters and returns the number of records in the recordset, a -1 value if the number of records can't be determined, and a zero value if there are no records in the recordset. GetRecordCount() has the following prototype:
long GetRecordCount(); The number of records in a recordset can be determined only if the application scrolls through the entire recordset. The count of records is maintained as a counter that is incremented with each forward read. The true total number of records is known only after the application has scrolled past the last record. Using MoveLast() won't affect the record counter. The GetStatus() function is used to obtain status information about the current recordset. GetStatus() takes one parameter, a reference to the CRecordsetStatus structure, and has no return value. GetStatus() has the following prototype:
void GetStatus(CRecordsetStatus & rsStatus); The members of the CRecordsetStatus class are shown in the following code fragment:
struct CRecordsetStatus
{
    long m_lCurrentRecord;      // Zero-based index of current record
                                // if the current record is known, or
                                // AFX_CURRENT_RECORD_UNDEFINED if the
                                // current record is undefined.
    BOOL m_bRecordCountFinal;   // Nonzero if the total number of records
                                // in the recordset has been determined.
};

The GetTableName() function is used to fetch the name of the recordset table. GetTableName() takes no parameters and returns a CString reference. GetTableName() has the following prototype:
CString & GetTableName(); The CString returned won't contain a name if the recordset was based on a join or if the recordset was created by a call to a stored procedure. The GetSQL() function is used to return a CString reference that contains the current SQL statement. The SQL statement is the SELECT statement used to generate the recordset. GetSQL() takes no parameters and returns a CString reference. GetSQL() has the following prototype:
CString & GetSQL(); The returned SQL string usually will have been modified by the system to include any filtering (a WHERE clause) and sorting (an ORDER BY clause). The IsOpen() function is used to determine whether the CRecordset Open() or Requery() functions have been called and whether the recordset has been closed. IsOpen() takes no parameters and returns a nonzero value if there has been a call to Open() or Requery() without an intervening call to Close(). IsOpen() has the following prototype:
BOOL IsOpen(); Your application should check the IsOpen() function prior to calling Open(). The IsBOF() function is used to check whether the current record is the first record in the dataset. IsBOF() takes no parameters and returns a nonzero value if the recordset is empty or if the application has scrolled to before the first record in the recordset. IsBOF() has the following prototype:
BOOL IsBOF();
CAUTION The IsBOF() function should be called prior to scrolling backward in a recordset. Scrolling backward when there are no records in the recordset or when the current record pointer is before the first record in the recordset causes an error.
The IsEOF() function is used to determine whether the current record is the last record in the dataset. IsEOF() takes no parameters and returns a nonzero value if the recordset is empty or if the application has scrolled to after the last record in the recordset. IsEOF() has the following prototype:
BOOL IsEOF();
CAUTION The IsEOF() function should be called prior to scrolling forward in a recordset. Scrolling forward when there are no records in the recordset or when the current record pointer is after the last record in the recordset causes an error.
The IsDeleted() function is used to determine whether the current record in the recordset has been deleted. IsDeleted() takes no parameters and returns a nonzero value if the current record has been marked as deleted. IsDeleted() has the following prototype:
BOOL IsDeleted();
CAUTION It's considered an error to update or delete a record that has been marked as deleted.
Recordset Update Operations

Four member functions deal directly with CRecordset updating:

AddNew()
Delete()
Edit()
Update()

With these member functions, applications can add, delete, and edit records in the recordset. The AddNew() function is used to prepare a new record to be added to the recordset. This record's contents must then be filled in by the application. After the new record's contents are filled in, Update() should be called to write the record. AddNew() takes no parameters and has no return value. AddNew() has the following prototype:
void AddNew();
The AddNew() function throws a CDBException or a CFileException if an error occurs (such as trying to add records to a dataset that is read-only). AddNew() can be used as part of a transaction if the dataset supports transactions.

The Delete() function is used to delete the current record from the recordset. After calling Delete(), you must explicitly scroll to another record. Delete() takes no parameters and has no return value. Delete() has the following prototype:

void Delete();

The Delete() function will throw a CDBException if an error occurs (such as trying to delete records in a dataset that is read-only). Delete() can be used as part of a transaction if the dataset supports transactions.

The Edit() function is used to prepare the current record for editing. The Edit() function saves the record's current values. If you call Edit(), make changes, and then call Edit() a second time (without a call to Update()), the changes are lost and the record is restored to its original values. Edit() takes no parameters and has no return value. Edit() has the following prototype:

void Edit();

The Edit() function throws a CDBException if an error occurs. Edit() can be used as part of a transaction if the dataset supports transactions.

The Update() function is used to write the record that has been added or edited by the other recordset update operations. Update() takes no parameters and returns a nonzero value if a record was actually updated or zero if no records were updated. Update() has the following prototype:

BOOL Update();

The Update() function throws a CDBException if an error occurs. Update() can be used as part of a transaction if the dataset supports transactions. A short example of the add and edit sequences appears below.
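The following fragment sketches the typical add and edit sequences built from these four functions. The recordset class (CCustomerSet) and its m_Company_Name field are hypothetical names used only for illustration; the recordset is assumed to be open and updatable.

// Assumes rsCustomers is an open, updatable CRecordset-derived object.
void AddAndEditExample(CCustomerSet& rsCustomers)
{
    TRY
    {
        // Add a new record: AddNew(), fill in the field members, then Update().
        rsCustomers.AddNew();
        rsCustomers.m_Company_Name = _T("New Company, Inc.");
        if (!rsCustomers.Update())
            TRACE0("No record was added.\n");

        // Edit the current record: Edit(), change the field members, then Update().
        rsCustomers.Edit();
        rsCustomers.m_Company_Name = _T("Renamed Company, Inc.");
        if (!rsCustomers.Update())
            TRACE0("No record was updated.\n");
    }
    CATCH(CDBException, e)
    {
        // m_strError holds the text of the ODBC error message.
        AfxMessageBox(e->m_strError);
    }
    END_CATCH
}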
Five member functions deal directly with CRecordset record navigation:

Move()
MoveFirst()
MoveLast()
MoveNext()
MovePrev()

You should also refer to the IsBOF() and IsEOF() functions, described in the section "Recordset Attributes." With these member functions, applications can move forward, backward, to a specific record, to the beginning of a recordset, and to the end of a recordset.

The Move() function is used to move to a specific record in the recordset, relative to the current record. This function allows random movement in the recordset. Move() takes one parameter and has no return value. Move() has the following prototype:

void Move(long lRows);

Use a negative parameter value to move backward from the current record. The Move() function throws a CDBException, CFileException, or CMemoryException if it fails.

WARNING
Don't call any move function for a recordset that doesn't have any records (if both IsEOF() and IsBOF() return nonzero, the recordset is empty).

The MoveFirst() function is used to move to the first record in the recordset. MoveFirst() takes no parameters and has no return value. MoveFirst() has the following prototype:

void MoveFirst();

The MoveFirst() function throws a CDBException, CFileException, or CMemoryException if it fails.

The MoveLast() function is used to move to the last record in the recordset. MoveLast() takes no parameters and has no return value. MoveLast() has the following prototype:

void MoveLast();

The MoveLast() function throws a CDBException, CFileException, or CMemoryException if it fails.

The MoveNext() function is used to move to the next record in the recordset. If you're positioned after the last record in the recordset, don't call MoveNext(). MoveNext() takes no parameters and has no return value. MoveNext() has the following prototype:

void MoveNext();

The MoveNext() function throws a CDBException, CFileException, or CMemoryException if it fails.

The MovePrev() function is used to move to the previous record in the recordset. If you're positioned before the first record in the recordset, don't call MovePrev(). MovePrev() takes no parameters and has no return value. MovePrev() has the following prototype:

void MovePrev();

The MovePrev() function throws a CDBException, CFileException, or CMemoryException if it fails. A forward scan using these functions is sketched below.
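A minimal sketch of such a scan; the recordset class (CCustomerSet) and its m_Company_Name field are hypothetical names, and the recordset is assumed to be open already.

void DumpCompanyNames(CCustomerSet& rsCustomers)
{
    // An empty recordset reports both IsBOF() and IsEOF() as nonzero,
    // so this also guards against scrolling in an empty recordset.
    if (rsCustomers.IsBOF() && rsCustomers.IsEOF())
        return;

    rsCustomers.MoveFirst();                 // Position on the first record
    while (!rsCustomers.IsEOF())             // Never scroll past the last record
    {
        TRACE1("Company: %s\n", (LPCTSTR)rsCustomers.m_Company_Name);
        rsCustomers.MoveNext();
    }
}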
Eight member functions deal directly with CRecordset operations:

Cancel()
IsFieldDirty()
IsFieldNull()
IsFieldNullable()
Requery()
SetFieldDirty()
SetFieldNull()
SetLockingMode()

With these member functions, applications can perform miscellaneous operations on recordsets.

The Cancel() function is used to cancel a pending asynchronous operation. Cancel() takes no parameters and has no return value. Cancel() has the following prototype:

void Cancel();

If there is no pending asynchronous operation, Cancel() simply returns.

The IsFieldDirty() function is used to determine whether a specified field has been changed. IsFieldDirty() takes one parameter (a pointer to a field data member) and returns a nonzero value if the field has, in fact, been modified. IsFieldDirty() has the following prototype:

BOOL IsFieldDirty(void * pField);

If the pField pointer parameter is NULL, all fields in the record are checked.

The IsFieldNull() function is used to determine whether a specified field is currently null (contains no value). IsFieldNull() takes one parameter (a pointer to a field data member) and returns a nonzero value if the field is, in fact, null. IsFieldNull() has the following prototype:

BOOL IsFieldNull(void * pField);

If the pField pointer parameter is NULL, all fields in the record are checked. Note that the C/C++ NULL is different from the SQL null.

The IsFieldNullable() function is used to determine whether a specified field can be set to null (containing no value). IsFieldNullable() takes one parameter (a pointer to a field data member) and returns a nonzero value if the field can be set to null. IsFieldNullable() has the following prototype:

BOOL IsFieldNullable(void * pField);

If the pField pointer parameter is NULL, all fields in the record are checked. Note that the C/C++ NULL is different from the SQL null.

The Requery() function is used to refresh the recordset. A call to the CanRestart() function should be made prior to calling Requery(). Requery() takes no parameters and returns a nonzero value if the refresh was successful. Requery() has the following prototype:

BOOL Requery();

The Requery() function throws a CDBException, CFileException, or CMemoryException if it fails.

The SetFieldDirty() function is used to modify the dirty flag for a specified field. SetFieldDirty() takes two parameters: a pointer to a field data member and a Boolean value specifying the new value for the dirty flag. SetFieldDirty() has no return value and has the following prototype:

void SetFieldDirty(void * pField, BOOL bDirty = TRUE);

If the pField pointer parameter is NULL, all fields in the record are marked with the value of the bDirty parameter. Note that the C/C++ NULL is different from the SQL null.

The SetFieldNull() function is used to modify the null flag for a specified field. SetFieldNull() takes two parameters: a pointer to a field data member and a Boolean value specifying the new value for the null flag. SetFieldNull() has no return value and has the following prototype:

void SetFieldNull(void * pField, BOOL bNull = TRUE);

If the pField pointer parameter is NULL, all fields in the record are marked with the value of the bNull parameter. Note that the C/C++ NULL is different from the SQL null.

The SetLockingMode() function is used to change the record locking mode. SetLockingMode() takes one parameter (nMode), which must be either optimistic or pessimistic. SetLockingMode() has no return value and has the following prototype:

void SetLockingMode(UINT nMode);

The pessimistic mode is more cautious than the optimistic mode. Both pessimistic and optimistic are defined in CRecordset. Pessimistic mode locks the record as soon as Edit() is called; optimistic mode locks the record only while the update is being performed. An example of these field-state and locking functions appears below.
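A short sketch of these calls, again using a hypothetical open recordset class (CCustomerSet) with an m_Fax field; the names are illustrative only.

void ClearFaxNumber(CCustomerSet& rsCustomers)
{
    // Use pessimistic locking for this edit, if the driver supports it.
    rsCustomers.SetLockingMode(CRecordset::pessimistic);

    rsCustomers.Edit();
    if (rsCustomers.IsFieldNullable(&rsCustomers.m_Fax) &&
        !rsCustomers.IsFieldNull(&rsCustomers.m_Fax))
    {
        rsCustomers.SetFieldNull(&rsCustomers.m_Fax);   // Store a SQL null in the column
    }
    rsCustomers.Update();

    // Restore the default locking mode for later edits.
    rsCustomers.SetLockingMode(CRecordset::optimistic);
}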
Recordset Overridables
Applications may override five members to allow control over the recordset:

DoFieldExchange()
GetDefaultConnect()
GetDefaultSQL()
OnSetOptions()
OnWaitForDataSource()

The DoFieldExchange() function is used to transfer data to and from the field variables and records in the recordset. If your application is built with AppWizard, a default DoFieldExchange() function will be created. Also, modifications to the AppWizard-created DoFieldExchange() will be done by ClassWizard. DoFieldExchange() takes one parameter and has no return value. DoFieldExchange() has the following prototype:

void DoFieldExchange(CFieldExchange* pFX);
The CFieldExchange class object definition is shown in the following code fragment. The actual definition can be found in the AFXDB.H header file.
// CFieldExchange - for field exchange
class CFieldExchange
{
// Attributes
public:
    enum RFX_Operation
    {
        BindParam,            // Register user's parameters with ODBC SQLBindParameter
        RebindParam,          // Migrate param values to proxy array before requery
        BindFieldToColumn,    // Register user's fields with ODBC SQLBindCol
        BindFieldForUpdate,   // Temporarily bind columns before update (via SQLSetPos)
        UnbindFieldForUpdate, // Unbind columns after update (via SQLSetPos)
        Fixup,                // Set string lengths and clear status bits
        MarkForAddNew,
        MarkForUpdate,        // Prepare fields and flags for update operation
        Name,                 // Append dirty field name
        NameValue,            // Append dirty name=value
        Value,                // Append dirty value or parameter marker
        SetFieldDirty,        // Set status bit for changed status
        SetFieldNull,         // Set status bit for null value
        IsFieldDirty,         // Return TRUE if field is dirty
        IsFieldNull,          // Return TRUE if field is marked NULL
        IsFieldNullable,      // Return TRUE if field can hold NULL values
        StoreField,           // Archive values of current record
        LoadField,            // Reload archived values into current record
        GetFieldInfoValue,    // General info on a field via pv for field
        GetFieldInfoOrdinal,  // General info on a field via field ordinal
#ifdef _DEBUG
        DumpField,
#endif
    };
    UINT m_nOperation;
    CRecordset* m_prs;

// Operations
    enum FieldType
    {
        noFieldType,
        outputColumn,
        param,
    };

    // Operations (for implementors of RFX procs)
    BOOL IsFieldType(UINT* pnField);
    // Indicate purpose of subsequent RFX calls
    void SetFieldType(UINT nFieldType);

// Implementation
    CFieldExchange(UINT nOperation, CRecordset* prs, void* pvField = NULL);
    void Default(LPCTSTR szName, void* pv, LONG* plLength, int nCType,
        UINT cbValue, UINT cbPrecision);
    int GetColumnType(int nColumn, UINT* pcbLength = NULL,
        int* pnScale = NULL, int* pnNullable = NULL);

    // Long binary helpers
    long GetLongBinarySize(int nField);
    void GetLongBinaryData(int nField, CLongBinary& lb, long* plSize);
    BYTE* ReallocLongBinary(CLongBinary& lb, long lSizeRequired, long lReallocSize);

    UINT m_nFieldType;            // Current type of field

    // For GetFieldInfo
    CFieldInfo* m_pfi;            // GetFieldInfo return struct
    BOOL m_bFieldFound;           // GetFieldInfo search successful

    // For returning status info for a field
    BOOL m_bNull;                 // Return result of IsFieldNull(able)/Dirty operation
    BOOL m_bDirty;                // Return result of IsFieldNull(able)/Dirty operation

    CString* m_pstr;              // Field name or destination for building various SQL clauses
    BOOL m_bField;                // Value to set for SetField operation
    void* m_pvField;              // For indicating an operation on a specific field
    CArchive* m_par;              // For storing/loading copy buffer
    LPCTSTR m_lpszSeparator;      // Append after field names
    UINT m_nFields;               // Count of fields for various operations
    UINT m_nParams;               // Count of fields for various operations
    UINT m_nParamFields;          // Count of fields for various operations
    HSTMT m_hstmt;                // For SQLBindParameter on update statement
    long m_lDefaultLBFetchSize;   // For fetching CLongBinary data of unknown length
    long m_lDefaultLBReallocSize; // For fetching CLongBinary data of unknown length
#ifdef _DEBUG
    CDumpContext* m_pdcDump;
#endif //_DEBUG
};

A typical AppWizard-created DoFieldExchange() function is shown in the following code fragment. This example is from the sample program shown in Figure 2.1.
void CRecordViewSet::DoFieldExchange(CFieldExchange* pFX)
{
    //{{AFX_FIELD_MAP(CRecordViewSet)
    pFX->SetFieldType(CFieldExchange::outputColumn);
    RFX_Text(pFX, _T("[Customer ID]"), m_Customer_ID);
    RFX_Text(pFX, _T("[Company Name]"), m_Company_Name);
    RFX_Text(pFX, _T("[Contact Name]"), m_Contact_Name);
    RFX_Text(pFX, _T("[Contact Title]"), m_Contact_Title);
    RFX_Text(pFX, _T("[Address]"), m_Address);
    RFX_Text(pFX, _T("[City]"), m_City);
    RFX_Text(pFX, _T("[Region]"), m_Region);
    RFX_Text(pFX, _T("[Postal Code]"), m_Postal_Code);
    RFX_Text(pFX, _T("[Country]"), m_Country);
    RFX_Text(pFX, _T("[Phone]"), m_Phone);
    RFX_Text(pFX, _T("[Fax]"), m_Fax);
    //}}AFX_FIELD_MAP
}

The GetDefaultConnect() function is used to return the default SQL connect string. GetDefaultConnect() takes no parameters and returns a CString. GetDefaultConnect() has the following prototype:
CString GetDefaultConnect();

The default GetDefaultConnect() function created by AppWizard is shown in the following code fragment. This example is from the sample program shown later in this chapter (see the section called "An AppWizard-Generated Program"). It causes ODBC to display an open database dialog box.
CString CRecordViewSet::GetDefaultConnect()
{
    return _T("ODBC;DSN=MS Access 7.0 Database");
}
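If you don't want ODBC to prompt the user, GetDefaultConnect() can return a more complete connect string. The DSN, user ID, and password below are placeholders, not values from the sample program; a sketch only:

CString CRecordViewSet::GetDefaultConnect()
{
    // Supplying the DSN (and, for a secured datasource, UID and PWD)
    // lets ODBC connect silently instead of prompting the user.
    return _T("ODBC;DSN=MyCustomerDSN;UID=admin;PWD=secret");
}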
The GetDefaultSQL() function is used to return the default SQL string used to select records from the datasource to be placed in the recordset. GetDefaultSQL() takes no parameters and returns a CString. GetDefaultSQL() has the following prototype:

CString GetDefaultSQL();

The default GetDefaultSQL() function created by AppWizard is shown in the following code fragment. This example is from the sample program shown later in this chapter.
CString CRecordViewSet::GetDefaultSQL()
{
    return _T("[Customers]");
}

The OnSetOptions() function is used to set options for the specified HSTMT. OnSetOptions() takes one parameter (the statement handle to set options for) and has no return value. OnSetOptions() has the following prototype:
void OnSetOptions(HSTMT hstmt);

An AppWizard-created application doesn't have a default OnSetOptions() override. If you need one, you must write it yourself.

The OnWaitForDataSource() function is used to allow the application to ask the user (or simply query a control) whether the current asynchronous operation should be canceled. If the user really wants to cancel, your OnWaitForDataSource() function should call the Cancel() function to end the asynchronous operation. OnWaitForDataSource() takes one parameter (a Boolean value that will be nonzero if the datasource is still waiting for an asynchronous operation) and has no return value. OnWaitForDataSource() has the following prototype:

void OnWaitForDataSource(BOOL bStillWaiting);

An AppWizard-created application doesn't have a default OnWaitForDataSource() override. If you need one, ClassWizard will add the shell of the OnWaitForDataSource() handler for you, which you can then fill in as needed.
CRecordView
The CRecordView object is used to manage recordsets. This object is usually used with the CDatabase and CRecordset objects. The member functions in the CRecordView object offer a powerful set of database record manipulation tools. The CRecordView object is derived from the CFormView base class. Figure 2.4 shows the class hierarchy for the CRecordView class.

Figure 2.4. The CRecordView class hierarchy.

The CRecordView class object has a number of member functions. These functions are divided into three categories:

Construction: The constructor is the only construction member.

Attributes: Three functions are used to obtain information about the recordset that the CRecordView object has been attached to.

Operations: A single function, OnMove(), is provided to let the programmer change the CRecordView current record pointer.
The following sections take a closer look at the members of this class. This book doesn't cover the members of the other classes on which CRecordView is based. Refer to the Visual C++ documentation (either the manuals or the online help system) for full information about these classes.
Construction
There is one construction member function: CRecordView(). There also is the default destructor, which I won't document here because it's never called by an application. The following list describes the construction member function and gives an example of its use.

The CRecordView() function is used to initialize the CRecordView object. CRecordView() takes one parameter: an identifier for the dialog box template. Because CRecordView() is a constructor, it has no defined return value. CRecordView() has the following prototypes:

CRecordView(LPCSTR lpTemplateName);

or

CRecordView(UINT nTemplateID);

The following code fragment shows the default override constructor provided by AppWizard when a database application is created. This example is from the sample program shown later in this chapter.
CRecordViewView::CRecordViewView()
    : CRecordView(CRecordViewView::IDD)
{
    //{{AFX_DATA_INIT(CRecordViewView)
    m_pSet = NULL;
    //}}AFX_DATA_INIT
    // TODO: add construction code here
}
Attributes
Three member functions deal directly with CRecordView attributes: OnGetRecordset(), IsOnFirstRecord(), and IsOnLastRecord(). With these member functions, applications can obtain information about the record view.

The OnGetRecordset() function is used to get the pointer to the default CRecordset object that is attached to this CRecordView. OnGetRecordset() takes no parameters and returns a CRecordset pointer. OnGetRecordset() has the following prototype:

CRecordset * OnGetRecordset();

The following code fragment shows the default OnGetRecordset() provided by AppWizard when a database application is created. In this example, m_pSet was initialized in the constructor. This example is from the sample program shown later in this chapter.

CRecordset* CRecordViewView::OnGetRecordset()
{
    return m_pSet;
}

The IsOnFirstRecord() function is used to tell the view that the current record is the first record. This is necessary to allow the user interface to enable or disable the controls for moving to previous records. IsOnFirstRecord() takes no parameters and returns a nonzero value when the current record is the first record. IsOnFirstRecord() has the following prototype:
BOOL IsOnFirstRecord();

An AppWizard-created application doesn't override IsOnFirstRecord(). If you want to provide special processing in your own IsOnFirstRecord() handler, you must write it yourself; ClassWizard won't create a shell IsOnFirstRecord() handler for you. The default IsOnFirstRecord() function is shown in the following code fragment:
BOOL CRecordView::IsOnFirstRecord()
{
    ASSERT_VALID(this);
    CRecordsetStatus status;
    OnGetRecordset()->GetStatus(status);
    return status.m_lCurrentRecord == 0;
}

The IsOnLastRecord() function is used to tell the view that the current record is the last record. This is necessary to allow the user interface to enable or disable the controls for moving to later records. IsOnLastRecord() takes no parameters and returns a nonzero value when the current record is the last record. If IsOnLastRecord() is unable to determine whether the current record is the last record, it returns zero. IsOnLastRecord() has the following prototype:
BOOL IsOnLastRecord();

An AppWizard-created application doesn't override IsOnLastRecord(). If you want to provide special processing in your own IsOnLastRecord() handler, you must write it yourself; ClassWizard won't create a shell IsOnLastRecord() handler for you. The default IsOnLastRecord() follows the same pattern as IsOnFirstRecord(), using the CRecordsetStatus information returned by GetStatus(); a sketch of that logic appears below.
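A minimal sketch of that check, written here as an approximation rather than the exact MFC source; the helper name is hypothetical, and it assumes the record count is only trustworthy once m_bRecordCountFinal is nonzero:

BOOL CMyRecordView::IsLastRecordSketch()   // Hypothetical helper, not the MFC source
{
    ASSERT_VALID(this);
    CRecordset* pSet = OnGetRecordset();
    CRecordsetStatus status;
    pSet->GetStatus(status);

    // Until the full record count is known, the last record can't be identified.
    if (!status.m_bRecordCountFinal)
        return FALSE;

    // Zero-based index: the last record is the record count minus one.
    return status.m_lCurrentRecord == pSet->GetRecordCount() - 1;
}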
Operations
One member function deals directly with CRecordView operations. With this member function, applications can control which actions take place when the current record pointer is changed. The OnMove() function is used to let the programmer change the current record pointer. Despite the name of this function, it's not normally overridden by an application. OnMove() takes one parameter and returns a nonzero value if the record pointer was successfully moved and a zero value if the call failed. OnMove() has the following prototype:
BOOL OnMove(UINT nMoveCommand);

The nMoveCommand parameter must be one of the manifest values shown in Table 2.1.
Table 2.1. OnMove() nMoveCommand values.

ID_RECORD_FIRST   Moves to the first record in the recordset.
ID_RECORD_NEXT    Moves to the next record in the recordset, provided that the current record isn't the last record in the recordset.
ID_RECORD_LAST    Moves to the last record in the recordset.
ID_RECORD_PREV    Moves to the previous record in the recordset, provided that the current record isn't the first record in the recordset.

WARNING
Be careful not to call OnMove() on a recordset that has no records.
CFieldExchange
The CFieldExchange class is used to support the record field exchange used by the other database classes. The DoFieldExchange() function has a CFieldExchange pointer passed to it. The CFieldExchange class object is used to encapsulate the exchange of data between records in a recordset and the variables in the application that hold the column data. The CFieldExchange object is a base class that isn't derived from any other MFC class. Figure 2.5 shows the class hierarchy for the CFieldExchange class.

Figure 2.5. The CFieldExchange class hierarchy.

The CFieldExchange class object has two public member functions: IsFieldType() and SetFieldType(). These functions aren't divided into categories.

The IsFieldType() function is used to determine whether the current operation (transfer) can be performed on the current field. IsFieldType() takes one parameter (pnField, a pointer to an index to the field) and returns a nonzero value if the operation can be performed. IsFieldType() has the following prototype:
BOOL IsFieldType(UINT * pnField);

The IsFieldType() function is useful when you write your own RFX functions. An example of an RFX function is shown in the following code fragment. This code is from the DBRFX.CPP file. Note the call to IsFieldType() near the top of the function.
void AFXAPI RFX_Int(CFieldExchange* pFX, LPCTSTR szName, int& value)
{
    ASSERT(AfxIsValidAddress(pFX, sizeof(CFieldExchange)));
    ASSERT(AfxIsValidString(szName));

    UINT nField;
    if (!pFX->IsFieldType(&nField))
        return;

    LONG* plLength = pFX->m_prs->GetFieldLength(pFX);
    switch (pFX->m_nOperation)
    {
    case CFieldExchange::BindFieldToColumn:
        {
#ifdef _DEBUG
            int nSqlType = pFX->GetColumnType(nField);
            if (nSqlType != SQL_C_SHORT)
            {
                // Warn of possible field schema mismatch
                if (afxTraceFlags & traceDatabase)
                    TRACE1("Warning: int converted from SQL type %ld.\n", nSqlType);
            }
#endif
        }
        // fall through

    default:
LDefault:
        pFX->Default(szName, &value, plLength, SQL_C_LONG, sizeof(value), 5);
        return;

    case CFieldExchange::Fixup:
        if (*plLength == SQL_NULL_DATA)
        {
            pFX->m_prs->SetFieldFlags(nField, AFX_SQL_FIELD_FLAG_NULL,
                pFX->m_nFieldType);
            value = AFX_RFX_INT_PSEUDO_NULL;
        }
        return;

    case CFieldExchange::SetFieldNull:
        if ((pFX->m_pvField == NULL &&
            pFX->m_nFieldType == CFieldExchange::outputColumn) ||
            pFX->m_pvField == &value)
        {
            if (pFX->m_bField)
            {
                // Mark fields null
                pFX->m_prs->SetFieldFlags(nField, AFX_SQL_FIELD_FLAG_NULL,
                    pFX->m_nFieldType);
                value = AFX_RFX_INT_PSEUDO_NULL;
                *plLength = SQL_NULL_DATA;
            }
            else
            {
                pFX->m_prs->ClearFieldFlags(nField, AFX_SQL_FIELD_FLAG_NULL,
                    pFX->m_nFieldType);
                *plLength = sizeof(value);
            }
#ifdef _DEBUG
            pFX->m_bFieldFound = TRUE;
#endif
        }
        return;

    case CFieldExchange::MarkForAddNew:
        // Can force writing of pseudo-null value (as a non-null) by
        // setting field dirty
        if (!pFX->m_prs->IsFieldFlagDirty(nField, pFX->m_nFieldType))
        {
            if (value != AFX_RFX_INT_PSEUDO_NULL)
            {
                pFX->m_prs->SetFieldFlags(nField, AFX_SQL_FIELD_FLAG_DIRTY,
                    pFX->m_nFieldType);
                pFX->m_prs->ClearFieldFlags(nField, AFX_SQL_FIELD_FLAG_NULL,
                    pFX->m_nFieldType);
            }
        }
        return;

    case CFieldExchange::MarkForUpdate:
        if (value != AFX_RFX_INT_PSEUDO_NULL)
            pFX->m_prs->ClearFieldFlags(nField, AFX_SQL_FIELD_FLAG_NULL,
                pFX->m_nFieldType);
        goto LDefault;

    case CFieldExchange::GetFieldInfoValue:
        if (pFX->m_pfi->pv == &value)
        {
            pFX->m_pfi->nField = nField-1;
            goto LFieldFound;
        }
        return;

    case CFieldExchange::GetFieldInfoOrdinal:
        if (nField-1 == pFX->m_pfi->nField)
        {
LFieldFound:
            pFX->m_pfi->nDataType = AFX_RFX_INT;
            pFX->m_pfi->strName = szName;
            pFX->m_pfi->pv = &value;
            pFX->m_pfi->dwSize = sizeof(value);

            // Make sure field found only once
            ASSERT(pFX->m_bFieldFound == FALSE);
            pFX->m_bFieldFound = TRUE;
        }
        return;

#ifdef _DEBUG
    case CFieldExchange::DumpField:
        *pFX->m_pdcDump << "\n" << szName << " = " << value;
        return;
#endif //_DEBUG
    }
}

The SetFieldType() function is used to set the field types prior to calls to the RFX functions. SetFieldType() takes one parameter (nFieldType, an enum that is declared in CFieldExchange; see Table 2.2). SetFieldType() has no return value and has the following prototype:
void SetFieldType(UINT nFieldType);

The valid nFieldType values are listed in Table 2.2.
For example, the default DoFieldExchange() function that AppWizard includes in a database application calls SetFieldType() as its first function call, with a parameter of CFieldExchange::outputColumn:
RFX_Text(pFX, _T("[Customer ID]"), m_Customer_ID); RFX_Text(pFX, _T("[Company Name]"), m_Company_Name); RFX_Text(pFX, _T("[Contact Name]"), m_Contact_Name); RFX_Text(pFX, _T("[Contact Title]"), m_Contact_Title); RFX_Text(pFX, _T("[Address]"), m_Address); RFX_Text(pFX, _T("[City]"), m_City); RFX_Text(pFX, _T("[Region]"), m_Region); RFX_Text(pFX, _T("[Postal Code]"), m_Postal_Code); RFX_Text(pFX, _T("[Country]"), m_Country); RFX_Text(pFX, _T("[Phone]"), m_Phone); RFX_Text(pFX, _T("[Fax]"), m_Fax); //}}AFX_FIELD_MAP }
CDBException
The CDBException class is used to handle error conditions that occur when a number of the database classes' member functions encounter problems. The CDBException class object is used to encapsulate the error condition. The CDBException object is derived from the CException class, which in turn is derived from the CObject class. Figure 2.6 shows the class hierarchy for the CDBException class.

Figure 2.6. The CDBException class hierarchy.

The CDBException class object has three public data members. The following sections take a closer look at the members of this class. This book doesn't cover the members of the other classes on which CDBException is based. You should refer to the Visual C++ documentation (either the manuals or the online help system) for full information about these classes.
Data Members
There are three data members in the CDBException object class: m_nRetCode, m_strError, and m_strStateNativeOrigin.

The m_nRetCode member variable is used to hold the return code that identifies the error. Valid return codes are shown in Table 2.3. The file \MSDEV\MFC\include\AFXDB.RC documents these values using string resource definitions.
Table 2.3. m_nRetCode values.

All Versions of MFC

AFX_SQL_ERROR_API_CONFORMANCE
    A CDatabase::Open() call was made, and the driver doesn't conform to the required ODBC API conformance level.

AFX_SQL_ERROR_CONNECT_FAIL
    The datasource connection failed. A NULL CDatabase pointer was passed to the CRecordset constructor, and a subsequent attempt to create a connection based on a call to GetDefaultConnect() failed.

AFX_SQL_ERROR_DATA_TRUNCATED
    More data was requested than would fit in the storage you provided. See the nMaxLength argument for the RFX_Text() and RFX_Binary() functions for information on expanding the space available.

AFX_SQL_ERROR_DYNASET_NOT_SUPPORTED
    The call to CRecordset::Open() that requested a dynaset failed because dynasets aren't supported by this ODBC driver.

AFX_SQL_ERROR_EMPTY_COLUMN_LIST
    An attempt was made to open a table, but no columns were identified in record field exchange (RFX) function calls in your DoFieldExchange() function.

AFX_SQL_ERROR_FIELD_SCHEMA_MISMATCH
    Your call to an RFX function in your DoFieldExchange() function wasn't compatible with the column data type in the recordset.

AFX_SQL_ERROR_ILLEGAL_MODE
    A call was made to CRecordset::Update() without having previously called CRecordset::AddNew() or CRecordset::Edit().

AFX_SQL_ERROR_LOCK_MODE_NOT_SUPPORTED
    A request to lock records for update couldn't be fulfilled because the ODBC driver being used doesn't support locking.

AFX_SQL_ERROR_MULTIPLE_ROWS_AFFECTED
    A call was made to CRecordset::Update() or CRecordset::Delete() for a table with no unique key, and multiple records were changed.

AFX_SQL_ERROR_NO_CURRENT_RECORD
    Your application has attempted to edit or delete a previously deleted record. The application must scroll to a different (nondeleted) record after deleting the current record.

AFX_SQL_ERROR_NO_POSITIONED_UPDATES
    The application's request for a dynaset couldn't be fulfilled because the ODBC driver doesn't support positioned updates.

AFX_SQL_ERROR_NO_ROWS_AFFECTED
    A call was made to CRecordset::Update() or CRecordset::Delete(), but when the operation began, the record couldn't be found anymore.

AFX_SQL_ERROR_ODBC_LOAD_FAILED
    The attempt to load ODBC.DLL failed. Windows couldn't find or couldn't load the ODBC.DLL. This error is fatal, and your program must end.

AFX_SQL_ERROR_ODBC_V2_REQUIRED
    The application's request for a dynaset couldn't be fulfilled because a Level 2-compliant ODBC driver is required, and the current ODBC driver isn't Level 2-compliant.

AFX_SQL_ERROR_RECORDSET_FORWARD_ONLY
    The attempt to scroll was unsuccessful because the datasource doesn't support backward scrolling.

AFX_SQL_ERROR_SNAPSHOT_NOT_SUPPORTED
    The application made a call to CRecordset::Open() requesting a snapshot, but the call failed. Snapshots aren't supported by the driver. This will occur only when the ODBC cursor library, ODBCCURS.DLL, can't be found.

AFX_SQL_ERROR_SQL_CONFORMANCE
    A call to CDatabase::Open() was made, and the driver doesn't conform to the required minimum ODBC SQL conformance level.

AFX_SQL_ERROR_SQL_NO_TOTAL
    It wasn't possible to specify the total size of a CLongBinary data value. This most likely happened because a global memory block couldn't be preallocated.

AFX_SQL_ERROR_RECORDSET_READONLY
    An attempt was made to update a recordset that was opened in read-only mode, or the datasource is read-only.

SQL_ERROR
    The function failed. The error message returned by ::SQLError() is stored in the m_strError data member.

SQL_INVALID_HANDLE
    A handle (either environment, connection, or statement) was invalid. This was caused by a programmer error.

AFX_SQL_ERROR_INCORRECT_ODBC
    This error isn't reported in MFC version 4.

MFC Version 4 Only

AFX_SQL_ERROR_DYNAMIC_CURSOR_NOT_SUPPORTED
    This ODBC driver doesn't support dynamic cursors.

AFX_SQL_ERROR_NO_DATA_FOUND
    The application attempted to move before the first record or after the last record.

AFX_SQL_ERROR_ROW_FETCH
    An error occurred while fetching a row from the server during an Open or Requery operation.

AFX_SQL_ERROR_ROW_UPDATE_NOT_SUPPORTED
    The ODBC driver doesn't support dynasets.

AFX_SQL_ERROR_UPDATE_DELETE_FAILED
    A call to SQLSetPos() returned SQL_SUCCESS_WITH_INFO explaining why the function call to CRecordset::ExecuteSetPosUpdate() failed.
When you're writing an application, it's important that all error trapping be implemented. It's unacceptable for an application to fail because of an unhandled exception condition.

The m_strError member variable is used to hold a string that contains the text of the error message. The string is in the format State %s, Native %ld, Origin %s. The State value is a five-character string containing the SQL error code. The Native error code is specific to the datasource. The Origin string is error message text returned by the ODBC component generating the error condition.

The m_strStateNativeOrigin member variable contains the error condition formatted as State %s, Native %ld, Origin %s. The State value is a five-character string containing the SQL error code. The Native error code is specific to the datasource. The Origin string is error message text returned by the ODBC component generating the error condition.
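A sketch of trapping a CDBException and examining these data members; the recordset class name (CCustomerSet) is hypothetical and used only for illustration:

CCustomerSet rsCustomers;               // Hypothetical CRecordset-derived class

TRY
{
    rsCustomers.Open();                 // May throw a CDBException
}
CATCH(CDBException, e)
{
    // m_strStateNativeOrigin holds the State/Native/Origin information,
    // and m_strError holds the error message text.
    AfxMessageBox(e->m_strError);

    switch (e->m_nRetCode)
    {
    case AFX_SQL_ERROR_CONNECT_FAIL:
        // The datasource connection failed; let the user pick another DSN, retry, and so on.
        break;
    default:
        break;
    }
}
END_CATCH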
CLongBinary
The CLongBinary class is used to hold large binary objects contained in databases. These objects are often referred to as BLOBs (binary large objects). Typical BLOBs are bitmap images, audio or video tracks, and specialized binary data. The CLongBinary class object is used to encapsulate the storage for such an object. The CLongBinary object is derived from the CObject class. Figure 2.7 shows the class hierarchy for the CLongBinary class.

Figure 2.7. The CLongBinary class hierarchy.

The CLongBinary class object has three public members that can be divided into two categories:

Data members: The data members of the CLongBinary class hold both a handle to the object and the object's length in bytes.

Construction: The constructor for the CLongBinary class.
The following sections take a closer look at the members of this class. This book doesn't cover the members of the other classes on which CLongBinary is based. Refer to the Visual C++ documentation (either the manuals or the online help system) for full information about these classes.
Data Members
There are two data members in the CLongBinary object class: m_dwDataLength and m_hData.
The m_dwDataLength member variable contains the real size of the object that is stored in the block of memory specified by the m_hData handle. The actual amount of storage allocated may exceed this value.

The m_hData member variable holds the Windows handle to the memory block that will contain the BLOB. You can determine the size of this memory block (which must be larger than the BLOB) by using a call to ::GlobalSize().
Construction Member
The CLongBinary object class has a single constructor. The CLongBinary() function is used to construct the CLongBinary object. CLongBinary() takes no parameters. Because it's a constructor, no return value is specified. CLongBinary() has the following prototype:
CLongBinary();

The CLongBinary class is used with the RFX_LongBinary() field exchange function.
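The following sketch shows one way to copy the bytes of a BLOB out of a CLongBinary member. It assumes a recordset field of type CLongBinary that was bound with RFX_LongBinary() in DoFieldExchange(); the function and parameter names are illustrative only.

// Copy the BLOB held by a CLongBinary field into a CByteArray.
void CopyBlob(CLongBinary& lbPhoto, CByteArray& bytes)
{
    bytes.SetSize((int)lbPhoto.m_dwDataLength);     // m_dwDataLength is the BLOB's real size

    // m_hData is a global memory handle; lock it to get a pointer to the bytes.
    const BYTE* pData = (const BYTE*)::GlobalLock(lbPhoto.m_hData);
    if (pData != NULL)
    {
        memcpy(bytes.GetData(), pData, lbPhoto.m_dwDataLength);
        ::GlobalUnlock(lbPhoto.m_hData);
    }
}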
RFX Functions
RFX functions transfer data between the application's variables and a record's columns. These functions are placed in a DoFieldExchange() function. The following code fragment shows an example of a DoFieldExchange() function.
RFX_Text(pFX, _T("[Customer ID]"), m_Customer_ID); RFX_Text(pFX, _T("[Company Name]"), m_Company_Name); RFX_Text(pFX, _T("[Contact Name]"), m_Contact_Name); RFX_Text(pFX, _T("[Contact Title]"), m_Contact_Title); RFX_Text(pFX, _T("[Address]"), m_Address); RFX_Text(pFX, _T("[City]"), m_City); RFX_Text(pFX, _T("[Region]"), m_Region); RFX_Text(pFX, _T("[Postal Code]"), m_Postal_Code); RFX_Text(pFX, _T("[Country]"), m_Country); RFX_Text(pFX, _T("[Phone]"), m_Phone); RFX_Text(pFX, _T("[Fax]"), m_Fax); //}}AFX_FIELD_MAP } Each of the RFX_...() functions allows transfer of a different type of data to and from the record's columns. Table 2.4 lists each RFX_...() function and describes its data types.
Table 2.4. The RFX_...() functions.

Function            Data Type      Description
RFX_Bool()          BOOL           Transfers a Boolean (TRUE/FALSE) value.
RFX_Byte()          BYTE           Transfers a byte (unsigned character) value.
RFX_Binary()        CByteArray     Transfers an array of byte values to the specified CByteArray object.
RFX_Double()        double         Transfers a floating-point (double) value.
RFX_Single()        float          Transfers a floating-point (float) value.
RFX_Int()           int            Transfers an integer (int) value.
RFX_Long()          long           Transfers a long integer (long) value.
RFX_LongBinary()    CLongBinary    Transfers an array of byte values to the specified CLongBinary object.
RFX_Text()          CString        Transfers a character string (CString) value.
RFX_Date()          CTime          Transfers a time value to a CTime object.
If there is no RFX_...() function to transfer the type of data you need, you can create your own RFX_...() functions. You can use the existing RFX_...() functions in the DBRFX.CPP file as starting points.
An AppWizard-Generated Program
This chapter's example is an AppWizard (Visual C++ 4) database program that uses ODBC. The final functionality (the dialog box controls for the main window, as well as connections between the controls in the dialog box and the program's variables) was done with the Visual C++ 4 IDE and ClassWizard. The actual time it took to develop this application was only a few minutes.
NOTE
Windows 95, Windows NT, and Visual C++ 4 all now support long filenames. This lets the class implementation files have meaningful names. You should give your projects meaningful names because you're no longer limited to project names of only a few characters to fit into DOS's 8.3 filename structure.
This program uses the CDatabase class, the CRecordset class, and the CRecordView class. This final part of the chapter takes a look at the files that support the CRecordView class (the Record ViewView.cpp file) and the CRecordset class (the Record ViewSet.cpp file).
CRecordView Support
The CRecordView class is supported in the Record ViewView.cpp file. The minimal support is sufficient to create a working application that can be easily turned into a working record browser with editing capabilities. The default implementation of the AppWizard-produced program doesn't support adding records to the recordset, but you can add this functionality easily.

First, the Record ViewView.cpp file, shown in Listing 2.1, contains the constructor and destructor for our CRecordView object, which is called CRecordViewView. The next function is DoDataExchange(), which transfers the fields in the recordset to the application's data variables. This is the first time I've mentioned the DoDataExchange() function. Its purpose is to transfer data between the application's variables and the main window's dialog box controls.
NOTE
Don't confuse the DoDataExchange() function, which transfers data between an application's variables and dialog box controls, with DoFieldExchange(), which transfers data between the same variables and the current record in the recordset.
The Record ViewView.cpp file also contains functions to assist the programmer in implementing printer support (including print preview) and diagnostic support.

Listing 2.1. The CRecordView handler: Record ViewView.cpp.
// Record ViewView.cpp : implementation of the CRecordViewView class
//

#include "stdafx.h"
#include "Record View.h"

#include "Record ViewSet.h"
#include "Record ViewDoc.h"
#include "Record ViewView.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif

/////////////////////////////////////////////////////////////////////////////
// CRecordViewView

IMPLEMENT_DYNCREATE(CRecordViewView, CRecordView)

BEGIN_MESSAGE_MAP(CRecordViewView, CRecordView)
    //{{AFX_MSG_MAP(CRecordViewView)
        // NOTE - the ClassWizard will add and remove mapping macros here.
        //    DO NOT EDIT what you see in these blocks of generated code!
    //}}AFX_MSG_MAP
    ON_COMMAND(ID_FILE_PRINT_PREVIEW, CRecordView::OnFilePrintPreview)
END_MESSAGE_MAP()

/////////////////////////////////////////////////////////////////////////////
// CRecordViewView construction/destruction

CRecordViewView::CRecordViewView()
    : CRecordView(CRecordViewView::IDD)
{
    //{{AFX_DATA_INIT(CRecordViewView)
    m_pSet = NULL;
    //}}AFX_DATA_INIT
    // TODO: add construction code here
}

CRecordViewView::~CRecordViewView()
{
}

void CRecordViewView::DoDataExchange(CDataExchange* pDX)
{
    CRecordView::DoDataExchange(pDX);
    //{{AFX_DATA_MAP(CRecordViewView)
    DDX_FieldText(pDX, IDC_ADDRESS, m_pSet->m_Address, m_pSet);
    DDX_FieldText(pDX, IDC_CITY, m_pSet->m_City, m_pSet);
    DDX_FieldText(pDX, IDC_COMPANY_NAME, m_pSet->m_Company_Name, m_pSet);
    DDX_FieldText(pDX, IDC_CUSTOMER_ID, m_pSet->m_Customer_ID, m_pSet);
    DDX_FieldText(pDX, IDC_FAX, m_pSet->m_Fax, m_pSet);
    DDX_FieldText(pDX, IDC_PHONE, m_pSet->m_Phone, m_pSet);
    DDX_FieldText(pDX, IDC_POSTAL_CODE, m_pSet->m_Postal_Code, m_pSet);
    DDX_FieldText(pDX, IDC_REGION, m_pSet->m_Region, m_pSet);
    //}}AFX_DATA_MAP
}

BOOL CRecordViewView::PreCreateWindow(CREATESTRUCT& cs)
{
    // TODO: Modify the Window class or styles here by modifying
    //  the CREATESTRUCT cs
    return CRecordView::PreCreateWindow(cs);
}

void CRecordViewView::OnInitialUpdate()
{
    m_pSet = &GetDocument()->m_recordViewSet;
    CRecordView::OnInitialUpdate();
}

/////////////////////////////////////////////////////////////////////////////
// CRecordViewView printing

BOOL CRecordViewView::OnPreparePrinting(CPrintInfo* pInfo)
{
    // Default preparation
    return DoPreparePrinting(pInfo);
}

void CRecordViewView::OnBeginPrinting(CDC* /*pDC*/, CPrintInfo* /*pInfo*/)
{
    // TODO: add extra initialization before printing
}

void CRecordViewView::OnEndPrinting(CDC* /*pDC*/, CPrintInfo* /*pInfo*/)
{
    // TODO: add cleanup after printing
}

/////////////////////////////////////////////////////////////////////////////
// CRecordViewView diagnostics

#ifdef _DEBUG
void CRecordViewView::AssertValid() const
{
    CRecordView::AssertValid();
}

void CRecordViewView::Dump(CDumpContext& dc) const
{
    CRecordView::Dump(dc);
}

CRecordViewDoc* CRecordViewView::GetDocument() // Non-debug version is inline
{
    ASSERT(m_pDocument->IsKindOf(RUNTIME_CLASS(CRecordViewDoc)));
    return (CRecordViewDoc*)m_pDocument;
}
#endif //_DEBUG

/////////////////////////////////////////////////////////////////////////////
// CRecordViewView database support

CRecordset* CRecordViewView::OnGetRecordset()
{
    return m_pSet;
}

/////////////////////////////////////////////////////////////////////////////
// CRecordViewView message handlers

When you use ClassWizard to add message handlers, these functions will be added to the end of the Record ViewView.cpp file.
CRecordset Support
The CRecordset class is supported in the Record ViewSet.cpp file. The minimal support is sufficient to create a working application that easily can be turned into a working record browser with edit capabilities.

First in the Record ViewSet.cpp file, shown in Listing 2.2, is the constructor for the CRecordset object. There is no default destructor, but you could provide one if it were needed. After the constructor are the GetDefaultConnect() and GetDefaultSQL() functions. The final function in the Record ViewSet.cpp file is the DoFieldExchange() function. This function manages the transfer of data between the application's variables and the recordset's current record.

Listing 2.2. The CRecordset support file Record ViewSet.cpp.
// Record ViewSet.cpp : implementation of the CRecordViewSet class
//

#include "stdafx.h"
#include "Record View.h"
#include "Record ViewSet.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#undef THIS_FILE
static char THIS_FILE[] = __FILE__;
#endif

/////////////////////////////////////////////////////////////////////////////
// CRecordViewSet implementation

IMPLEMENT_DYNAMIC(CRecordViewSet, CRecordset)

CRecordViewSet::CRecordViewSet(CDatabase* pdb)
    : CRecordset(pdb)
{
    //{{AFX_FIELD_INIT(CRecordViewSet)
    m_Customer_ID = _T("");
    m_Company_Name = _T("");
    m_Contact_Name = _T("");
    m_Contact_Title = _T("");
    m_Address = _T("");
    m_City = _T("");
    m_Region = _T("");
    m_Postal_Code = _T("");
    m_Country = _T("");
    m_Phone = _T("");
    m_Fax = _T("");
    m_nFields = 11;
    //}}AFX_FIELD_INIT
    m_nDefaultType = snapshot;
}

CString CRecordViewSet::GetDefaultConnect()
{
    return _T("ODBC;DSN=MS Access 7.0 Database");
}

CString CRecordViewSet::GetDefaultSQL()
{
    return _T("[Customers]");
}

void CRecordViewSet::DoFieldExchange(CFieldExchange* pFX)
{
    //{{AFX_FIELD_MAP(CRecordViewSet)
    pFX->SetFieldType(CFieldExchange::outputColumn);
    RFX_Text(pFX, _T("[Customer ID]"), m_Customer_ID);
    RFX_Text(pFX, _T("[Company Name]"), m_Company_Name);
    RFX_Text(pFX, _T("[Contact Name]"), m_Contact_Name);
    RFX_Text(pFX, _T("[Contact Title]"), m_Contact_Title);
    RFX_Text(pFX, _T("[Address]"), m_Address);
    RFX_Text(pFX, _T("[City]"), m_City);
    RFX_Text(pFX, _T("[Region]"), m_Region);
    RFX_Text(pFX, _T("[Postal Code]"), m_Postal_Code);
    RFX_Text(pFX, _T("[Country]"), m_Country);
    RFX_Text(pFX, _T("[Phone]"), m_Phone);
    RFX_Text(pFX, _T("[Fax]"), m_Fax);
    //}}AFX_FIELD_MAP
}

/////////////////////////////////////////////////////////////////////////////
// CRecordViewSet diagnostics

#ifdef _DEBUG
void CRecordViewSet::AssertValid() const
{
    CRecordset::AssertValid();
}

void CRecordViewSet::Dump(CDumpContext& dc) const
{
    CRecordset::Dump(dc);
}
#endif //_DEBUG
Summary
You need a thorough understanding of the classes that make up Visual C++'s data access support to develop commercial-quality Visual C++ database applications. This chapter began by showing each of the data access object classes, with a detailed explanation of the member functions of each data-related member object. The Record View sample application introduced you to the code that AppWizard creates when you use AppWizard to create a basic database application. This chapter also can serve as a reference for the functions and member variables of the data access classes.

Chapter 3, "Using Visual C++ Data Access Functions," completes Part I of this book by showing you how to use Visual C++'s native C database SQL...() functions. The SQL...() functions can be used either with the MFC data access classes or alone, without any of the data access classes. These functions are useful when your application must have greater control over the process of database access.
The SQL...() Functions: SQLAllocConnect(), SQLAllocEnv(), SQLAllocStmt(), SQLBindCol(), SQLBindParameter(), SQLBrowseConnect(), SQLCancel(), SQLColAttributes(), SQLColumnPrivileges(), SQLColumns(), SQLConnect(), SQLDataSources(), SQLDescribeCol(), SQLDescribeParam(), SQLDisconnect(), SQLDriverConnect(), SQLDrivers(), SQLError(), SQLExecDirect(), SQLExecute(), SQLExtendedFetch(), SQLFetch(), SQLForeignKeys(), SQLFreeConnect(), SQLFreeEnv(), SQLFreeStmt(), SQLGetConnectOption(), SQLGetCursorName(), SQLGetData(), SQLGetFunctions(), SQLGetInfo(), SQLGetStmtOption(), SQLGetTypeInfo(), SQLMoreResults(), SQLNativeSql(), SQLNumParams(), SQLNumResultCols(), SQLParamData(), SQLParamOptions(), SQLPrepare(), SQLPrimaryKeys(), SQLProcedureColumns(), SQLProcedures(), SQLPutData(), SQLRowCount(), SQLSetConnectOption(), SQLSetCursorName(), SQLSetPos(), SQLSetScrollOptions(), SQLSetStmtOption(), SQLSpecialColumns(), SQLStatistics(), SQLTablePrivileges(), SQLTables(), SQLTransact()

Using the SQL...() Functions: Using a Datasource; Handling Errors in SQL...() Statements; Quoting Names; Getting the Datasource from the User

Summary
Notwithstanding the existing functionality found in the MFC ODBC classes, there is no reason why you can't use both the MFC classes and the SQL...() functions in the same code. The MFC ODBC classes have all the necessary handles to allow usage of the SQL...() functions; in fact, the MFC ODBC classes use the SQL...() functions to perform most of their database manipulation tasks.
NOTE The SQL...() functions haven't changed substantially since their introduction. Minor changes to accommodate 32-bit programming (the original SQL...() functions were 16-bit) account for virtually all the changes found.
When you use the SQL...() functions, remember that there is a fixed order of usage. The sample code in the second part of this chapter provides more information about how to use the functions. Each function is presented with a prototype, function arguments (if any), an explanation of the return value (if any), a description of what the function should do, and an explanation of possible failures that might occur when you use the function. Some functions will fail for a number of reasons. When you encounter a failure, use SQLError() to determine the cause of the failure.
NOTE Using the SQL logging facility is often very useful in determining why an SQL function failed during development. Of course, you shouldn't expect your application's users to have SQL logging turned on. Don't forget that SQL logging will significantly affect ODBC performance because detailed information is written to the logging file for each and every SQL operation. Don't turn on SQL logging indiscriminately; use it when you need it and then turn it off.
In all cases where the return value is shown as type RETCODE, you should create a variable defined as this type to hold the return code. You can then use either an if() or a switch() statement to check the return code for errors.
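A minimal sketch of that pattern; the henv variable and the error-handling bodies are placeholders, and the fragment is assumed to appear inside a function that can simply return on failure:

RETCODE rc;                         // Holds each SQL...() return code
HENV henv = SQL_NULL_HENV;

rc = ::SQLAllocEnv(&henv);
switch (rc)
{
case SQL_SUCCESS:
    break;                          // Continue with SQLAllocConnect(), and so on
case SQL_ERROR:
default:
    // There is no valid handle yet, so report the failure and stop.
    AfxMessageBox(_T("SQLAllocEnv() failed."));
    return;
}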
NOTE
There are two versions of the ODBC SDK 2.1: 2.10a and 2.10b. You should use 2.10b, which was released in August of 1995, if you're using Visual C++ 2.x. The version of ODBC that is included with Visual C++ 4.0 is 2.5. Version 2.5 is intended for use on both Windows 95 and Windows NT versions 3.5 and 3.51.
NOTE The current version of the ODBC SDK is 2.5, which is included with both Visual C++ 4 and Visual C++ 1.5x. Microsoft hasn't announced whether (or when) further updates to the ODBC SDK will occur. It can be assumed that new versions of Visual C++ may well include new versions of ODBC.
SQLAllocConnect()
Prototype:

RETCODE SQLAllocConnect(HENV henv, HDBC FAR * phdbc)

Parameters:

HENV henv           An environment handle returned by the call to SQLAllocEnv().
HDBC FAR * phdbc    A pointer to the storage for the connection handle.

Return Value: This function will return one of the following values:

SQL_SUCCESS           The function was successful.
SQL_ERROR             The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE    The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.

If this function fails, your SQL function should end, or the error should be corrected and the function reexecuted.

Usage: The SQLAllocConnect() function is used to allocate the connection between the application and the datasource. It's called after a call to SQLAllocEnv(), and SQLAllocStmt() is called after a call to SQLAllocConnect(). You must call the SQLAllocConnect() function after getting an HENV handle from SQLAllocEnv(). Without a valid HENV handle, this function won't succeed. Always check the return code from this function for errors.

Notes: The function's results are placed in the handle pointed to by the phdbc parameter. If the SQLAllocConnect() function fails, the phdbc handle will be set to SQL_NULL_HDBC.
SQLAllocEnv()
Prototype:

RETCODE SQLAllocEnv(HENV FAR * phenv)

Parameters:

HENV FAR * phenv    A pointer to an environment handle.

Return Value: This function will return one of the following values:

SQL_SUCCESS    The function was successful.
SQL_ERROR      The function failed. Call SQLError() to get more information about the specific failure.

If this function fails, your SQL function should end, or the error should be corrected and the function reexecuted. Often, this function is the first SQL...() function that is called. If it fails, there may be no recovery.

Usage: Call the SQLAllocEnv() function to initialize the SQL environment. You should make a matching SQLFreeEnv() call when your calling function has finished accessing the datasource. Always check the return code from these functions for errors.

Notes: This function places the HENV in the supplied handle. If SQLAllocEnv() fails, the resultant phenv parameter is set to SQL_NULL_HENV.
SQLAllocStmt()
Prototype:

RETCODE SQLAllocStmt(HDBC hdbc, HSTMT FAR * hstmt)

Parameters:

HDBC hdbc           A connection handle returned by the call to SQLAllocConnect().
HSTMT FAR * hstmt   A pointer to a statement handle that will be filled in by this function.

Return Value: This function will return one of the following values:

SQL_SUCCESS              The function was successful.
SQL_SUCCESS_WITH_INFO    The function was successful, and more information is available.
SQL_ERROR                The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE       The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.

If this function fails, your SQL function should end, or the error should be corrected and the function reexecuted.

Usage: The SQLAllocStmt() function is used to allocate a statement handle. This statement handle is associated with the datasource to which the HDBC handle was connected.

Notes: If this function fails, the returned HSTMT handle will be set to SQL_NULL_HSTMT. A sketch showing the allocation order of these three functions follows.
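A minimal sketch of the fixed allocation order (environment, then connection, then statement). The DSN, user ID, and password are placeholders, and error handling is reduced to a single check per call for brevity:

HENV  henv  = SQL_NULL_HENV;
HDBC  hdbc  = SQL_NULL_HDBC;
HSTMT hstmt = SQL_NULL_HSTMT;

if (::SQLAllocEnv(&henv) != SQL_SUCCESS)
    return;

if (::SQLAllocConnect(henv, &hdbc) != SQL_SUCCESS)
    return;

// Connect to a datasource (the DSN, user ID, and password are placeholders).
RETCODE rc = ::SQLConnect(hdbc, (UCHAR FAR*)"MyCustomerDSN", SQL_NTS,
                          (UCHAR FAR*)"admin", SQL_NTS,
                          (UCHAR FAR*)"secret", SQL_NTS);
if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
    return;

if (::SQLAllocStmt(hdbc, &hstmt) != SQL_SUCCESS)
    return;

// ... use the statement handle, then free everything in reverse order:
::SQLFreeStmt(hstmt, SQL_DROP);
::SQLDisconnect(hdbc);
::SQLFreeConnect(hdbc);
::SQLFreeEnv(henv);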
SQLBindCol()
Prototype:
RETCODE SQLBindCol(HSTMT hstmt, UWORD icol, SWORD fCType, PTR rgbValue, SDWORD cbValueMax, SDWORD FAR * pcbValue)

Parameters:
HSTMT hstmt             A statement handle returned by the call to SQLAllocStmt().
UWORD icol              The index to the column in the table to which the variable is being bound.
SWORD fCType            The data type of the data variable that is being bound to column icol.
PTR rgbValue            A pointer to the location in the application where the column's data is to be stored. The data type of rgbValue should be defined by fCType.
SDWORD cbValueMax       The number of bytes in the storage location pointed to by rgbValue. Usually, the C sizeof() operator can be used for this parameter.
SDWORD FAR * pcbValue   A pointer to an SDWORD variable that will receive the count of how many bytes in rgbValue were used.

Return Value: This function will return one of the following values:

SQL_SUCCESS              The function was successful.
SQL_SUCCESS_WITH_INFO    The function was successful, and more information is available.
SQL_ERROR                The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE       The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.

If this function fails, your SQL function should end, or the error should be corrected and the function reexecuted.

Usage: Call the SQLBindCol() function only for the columns in a table that you need. You don't have to bind to every column in a table. Columns that don't have a variable bound to them will be discarded without error. You make a call (usually in a loop) to SQLFetch() or SQLExtendedFetch() to actually get the data from a record. Always check the return code from this function for errors.

Notes: If this function fails, use the SQLError() function to find out why. When a column hasn't been bound and later must be accessed, use SQLGetData(). A short sketch of binding a column and fetching rows appears below.
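A minimal sketch of that pattern, assuming hstmt is a statement handle on which a SELECT has already been executed with SQLExecDirect(); the column position and buffer size are placeholders:

char   szCompany[61];                   // Storage for the bound column
SDWORD cbCompany = 0;                   // Bytes actually returned

// Bind column 1 of the result set to szCompany.
RETCODE rc = ::SQLBindCol(hstmt, 1, SQL_C_CHAR, szCompany,
                          sizeof(szCompany), &cbCompany);
if (rc != SQL_SUCCESS)
    return;

// Fetch rows until SQLFetch() reports that no data remains.
while ((rc = ::SQLFetch(hstmt)) == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
{
    if (cbCompany != SQL_NULL_DATA)
        TRACE1("Company: %s\n", szCompany);
}

if (rc != SQL_NO_DATA_FOUND)
{
    // An error occurred; SQLError() can report the details.
}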
SQLBindParameter()
Prototype:
RETCODE SQLBindParameter(HSTMT hstmt, UWORD ipar, SWORD fParamType, SWORD fCType, SWORD fSqlType, UDWORD cbColDef, SWORD ibScale, PTR rgbValue, SDWORD cbValueMax, SDWORD FAR * pcbValue)

Parameters:
HSTMT hstmt             A statement handle returned by the call to SQLAllocStmt().
UWORD ipar              The parameter number, which is one-based (not zero-based) from left to right.
SWORD fParamType        Parameter ipar's type.
SWORD fCType            The C data type of the parameter.
SWORD fSqlType          The SQL data type of the parameter.
UDWORD cbColDef         The column's precision.
SWORD ibScale           The column's scale.
PTR rgbValue            A pointer to the location in the application where the column's data is to be stored. The data type of rgbValue should be defined by fCType.
SDWORD cbValueMax       The number of bytes in the storage location pointed to by rgbValue. Usually the C sizeof() operator can be used for this parameter.
SDWORD FAR * pcbValue   A pointer to an SDWORD variable that will receive the count of how many bytes in rgbValue were used.

Return Value: This function will return one of the following values:

SQL_SUCCESS              The function was successful.
SQL_SUCCESS_WITH_INFO    The function was successful, and more information is available.
SQL_ERROR                The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE       The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.

If this function fails, your SQL function should end, or the error should be corrected and the function reexecuted.

Usage: Call the SQLBindParameter() function to bind a buffer to a parameter marker in an SQL statement. Always check the return code from this function for errors.

Notes: If this function fails, use the SQLError() function to find out why. This function replaces the SQLSetParam() function found in ODBC version 1.x. A short sketch of binding a parameter appears below.
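A minimal sketch of binding one parameter marker before executing a prepared statement; the table, column, and value are placeholders, and hstmt is assumed to be a valid statement handle:

char   szCity[31] = "Madrid";           // Value supplied for the parameter marker
SDWORD cbCity = SQL_NTS;                // Null-terminated string

// The SELECT statement contains one '?' marker; prepare it first.
::SQLPrepare(hstmt,
    (UCHAR FAR*)"SELECT CompanyName FROM Customers WHERE City = ?", SQL_NTS);

// Bind parameter 1 (parameters are one-based) as an input character parameter.
RETCODE rc = ::SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT,
                                SQL_C_CHAR, SQL_VARCHAR,
                                sizeof(szCity) - 1, 0,
                                szCity, sizeof(szCity), &cbCity);
if (rc == SQL_SUCCESS)
    ::SQLExecute(hstmt);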
SQLBrowseConnect()
Prototype:
RETCODE SQLBrowseConnect(HDBC hdbc, UCHAR FAR * szConnStrIn, SWORD cbConnStrIn, UCHAR FAR * szConnStrOut, SWORD cbConnStrOutMax, SWORD FAR * pcbConnStrOut) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UCHAR FAR * szConnStrIn The input connection string.
SWORD cbConnStrIn The number of bytes in szConnStrIn.
UCHAR FAR * szConnStrOut The output connection string.
SWORD cbConnStrOutMax The number of bytes available in szConnStrOut.
SWORD FAR * pcbConnStrOut The count of the number of bytes actually used in szConnStrOut.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NEED_DATA More information in the input connect string was required than was supplied.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLBrowseConnect() function to enumerate the attributes of a specific datasource. Always check the return code from this function for errors, making additional calls as necessary to gather the desired information to establish the connection. Notes:
When a return code of either SQL_SUCCESS or SQL_SUCCESS_WITH_INFO is returned, your application will know that the enumeration process has completed and the application is connected to the datasource.
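A minimal sketch of one pass through the browse loop follows. It assumes hdbc has already been allocated with SQLAllocConnect(); the "SQL Server" driver name is only an example, and a real application would prompt the user for the attributes the driver reports as missing and call SQLBrowseConnect() again.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

// One pass of the SQLBrowseConnect() enumeration. The hdbc must already be
// allocated; the driver name is a placeholder.
void BrowseForConnection(HDBC hdbc)
{
    UCHAR   szConnStrOut[1024];
    SWORD   cbConnStrOut;
    RETCODE rc;

    rc = SQLBrowseConnect(hdbc, (UCHAR FAR *)"DRIVER={SQL Server}", SQL_NTS,
                          szConnStrOut, (SWORD)sizeof(szConnStrOut),
                          &cbConnStrOut);
    if (rc == SQL_NEED_DATA)
        // szConnStrOut now lists the attributes (server, database, and so on)
        // that still need values; gather them and call SQLBrowseConnect()
        // again with the completed keyword=value pairs.
        printf("Attributes still needed: %s\n", (char *)szConnStrOut);
    else if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
        printf("Connected: %s\n", (char *)szConnStrOut);
}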
SQLCancel()
Prototype:
RETCODE SQLCancel(HSTMT hstmt) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLCancel() function to cancel an asynchronous operation pending on the statement indicated by the hstmt parameter. Always check the return code from this function for errors. Notes: You can cancel functions running on hstmt that are running on other threads. You also can cancel functions on hstmt that require more data.
SQLColAttributes()
Prototype:
RETCODE SQLColAttributes (HSTMT hstmt, UWORD icol, UWORD fDescType, PTR rgbDesc, SWORD cbDescMax, SWORD FAR * pcbDesc, SWORD FAR * pfDesc) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD icol The index to the column in the table to which the variable is being bound.
UWORD fDescType A valid descriptor.
PTR rgbDesc A pointer to the location in the application where the column's descriptor data is to be stored. The data type of rgbDesc is determined by fDescType.
SWORD cbDescMax The number of bytes in the storage location pointed to by rgbDesc. Usually, the C sizeof() operator can be passed for this parameter.
SWORD FAR * pcbDesc A pointer to a variable that will receive the count of how many bytes in rgbDesc were used.
SWORD FAR * pfDesc A pointer to an integer variable that will receive the results of a query which returns a numeric result.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLColAttributes() function to gather information about a column in a table. Always check the return code from this function for errors. Notes: Table 3.1 shows the information that will be returned for columns.
Table 3.1. Column attributes returned by SQLColAttributes(). The ODBC version that introduced each attribute and the argument (rgbDesc or pfDesc) in which its value is returned are shown in parentheses.
SQL_COLUMN_AUTO_INCREMENT (1.0, pfDesc) Returns TRUE if the column is an auto-increment column and FALSE if it isn't. Only numeric columns can be auto-increment. Values may be inserted into an auto-increment column, but the auto-increment column can't be updated.
SQL_COLUMN_CASE_SENSITIVE (1.0, pfDesc) Returns TRUE if the column will be considered case-sensitive for sorts and comparisons. Columns that aren't character-based will return FALSE.
SQL_COLUMN_COUNT (1.0, pfDesc) The number of columns that are in the result set. The icol argument will be ignored.
SQL_COLUMN_DISPLAY_SIZE (1.0, pfDesc) Returns the maximum number of character positions that will be necessary to display data from the column.
SQL_COLUMN_LABEL (2.0, rgbDesc) The column's label or title. As an example, an Access database may have a column called ZipCodes that could be labeled (or titled) "5-Digit Zip Code." The column name is returned for columns that don't have specified labels or titles. For unnamed columns (such as those found in a text file datasource), an empty string is returned.
SQL_COLUMN_LENGTH (1.0, pfDesc) The number of bytes of data that will be transferred on an SQLGetData() or SQLFetch() operation when the SQL_C_DEFAULT parameter is specified.
SQL_COLUMN_MONEY (1.0, pfDesc) Returns TRUE if the column is a money data type.
SQL_COLUMN_NAME (1.0, rgbDesc) Returns the column name. If the column is unnamed, an empty string is returned. See SQL_COLUMN_LABEL earlier in this table.
SQL_COLUMN_NULLABLE (1.0, pfDesc) Returns SQL_NO_NULLS if the column doesn't accept null values, or returns SQL_NULLABLE if the column accepts null values. Will return SQL_NULLABLE_UNKNOWN if it can't be determined whether the column accepts null values.
SQL_COLUMN_OWNER_NAME (2.0, rgbDesc) Returns the name of the owner of the table that contains the specified column. When the datasource doesn't support owners (such as for xBase files) or the owner name can't be determined, an empty string will be returned.
SQL_COLUMN_PRECISION (1.0, pfDesc) Returns the precision of the column on the datasource.
SQL_COLUMN_QUALIFIER_NAME (2.0, rgbDesc) Returns the table qualifier for the column. For datasources that don't support qualifiers or where the qualifier name can't be determined, an empty string will be returned.
SQL_COLUMN_SCALE (1.0, pfDesc) Returns the scale of the column on the datasource.
SQL_COLUMN_SEARCHABLE (1.0, pfDesc) Returns SQL_UNSEARCHABLE when the column can't be used in a WHERE clause. Returns SQL_LIKE_ONLY if the column can be used in a WHERE clause only with the LIKE predicate; when the column is a type SQL_LONGVARCHAR or SQL_LONGVARBINARY, this is the usual return. Returns SQL_ALL_EXCEPT_LIKE if the column can be used in a WHERE clause with all comparison operators except LIKE. Returns SQL_SEARCHABLE if the column can be used in a WHERE clause with any comparison operator.
SQL_COLUMN_TABLE_NAME (2.0, rgbDesc) Returns the table name for the table that contains the column. When the table name can't be determined, an empty string is returned.
SQL_COLUMN_TYPE (1.0, pfDesc) Returns the SQL data type for the column.
SQL_COLUMN_TYPE_NAME (1.0, rgbDesc) A character string indicating the data type of the column: CHAR, VARCHAR, MONEY, LONG VARBINARY, or CHAR ( ) FOR BIT DATA. When the data type is unknown, an empty string is returned.
SQL_COLUMN_UNSIGNED (1.0, pfDesc) Returns TRUE if the column is either nonnumeric or is an unsigned numeric value.
SQL_COLUMN_UPDATABLE (1.0, pfDesc) The column is described with one of the following constants, which indicate how the column may be updated: SQL_ATTR_READONLY, SQL_ATTR_WRITE, or SQL_ATTR_READWRITE_UNKNOWN. When it can't be determined whether the column may be updated, SQL_ATTR_READWRITE_UNKNOWN is typically returned.
SQLColumnPrivileges()
Prototype:
RETCODE SQLColumnPrivileges(HSTMT hstmt, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner, SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName, UCHAR FAR * szColumnName, SWORD cbColumnName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szTableQualifier The table qualifier. Use an empty string for tables that don't support table qualifiers.
SWORD cbTableQualifier The length of szTableQualifier.
UCHAR FAR * szTableOwner The table owner name. Use an empty string for tables that don't support table owners.
SWORD cbTableOwner The length of szTableOwner.
UCHAR FAR * szTableName The table name.
SWORD cbTableName The length of szTableName.
UCHAR FAR * szColumnName The search pattern string (used for column names).
SWORD cbColumnName The length of szColumnName.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous event is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLColumnPrivileges() function to obtain a list of columns and privileges for the specified table. Always check the return code from this function for errors. Notes: The information is returned as a result set.
SQLColumns()
Prototype:
RETCODE SQLColumns(HSTMT hstmt, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner,
SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName, UCHAR FAR * szColumnName, SWORD cbColumnName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szTableQualifier The table qualifier. Use an empty string for tables that don't support table qualifiers.
SWORD cbTableQualifier The length of szTableQualifier.
UCHAR FAR * szTableOwner The table owner name. Use an empty string for tables that don't support table owners.
SWORD cbTableOwner The length of szTableOwner.
UCHAR FAR * szTableName The table name.
SWORD cbTableName The length of szTableName.
UCHAR FAR * szColumnName The search pattern string (used for column names).
SWORD cbColumnName The length of szColumnName.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLColumns() function to obtain a list of the columns in a specified table. Always check the return code from this function for errors.
Notes: The results are returned as a result set. The columns returned in the result set are shown in Table 3.2.
Result Set Column C Data Type
Data Type SQL_C_SSHORT
Type Name SQL_C_CHAR
Precision SQL_C_SLONG
Length SQL_C_SLONG
SQLConnect()
Prototype:
RETCODE SQLConnect(HDBC hdbc, UCHAR FAR * szDSN, SWORD cbDSN, UCHAR FAR * szUID, SWORD cbUID, UCHAR FAR * szAuthStr, SWORD cbAuthStr) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UCHAR FAR * szDSN A pointer to a string containing the datasource name.
SWORD cbDSN The length of szDSN.
UCHAR FAR * szUID A pointer to the string that contains the user's identifier.
SWORD cbUID The length of szUID.
UCHAR FAR * szAuthStr The password or authentication string.
SWORD cbAuthStr The length of szAuthStr.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLConnect() function to establish a connection between the application and a specific datasource. Always check the return code from this function for errors. Notes: The SQLConnect() function tells ODBC to load the driver in preparation for using the datasource.
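A minimal sketch of the full connection sequence follows. The DSN "Accounting" and the user ID and password are placeholders, and a real application would call SQLError() after any failure rather than simply returning.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Allocate the ODBC environment, a connection, and a statement, then
// connect to a datasource. The DSN, user ID, and password are placeholders.
RETCODE OpenDatabase(HENV FAR *phenv, HDBC FAR *phdbc, HSTMT FAR *phstmt)
{
    RETCODE rc;

    rc = SQLAllocEnv(phenv);                  // environment first
    if (rc != SQL_SUCCESS)
        return rc;

    rc = SQLAllocConnect(*phenv, phdbc);      // then a connection handle
    if (rc != SQL_SUCCESS)
        return rc;

    // Load the driver and attach to the datasource named by the DSN.
    rc = SQLConnect(*phdbc, (UCHAR FAR *)"Accounting", SQL_NTS,
                    (UCHAR FAR *)"sa", SQL_NTS,
                    (UCHAR FAR *)"password", SQL_NTS);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        return rc;

    return SQLAllocStmt(*phdbc, phstmt);      // finally a statement handle
}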
SQLDataSources()
Prototype:
RETCODE SQLDataSources(HENV henv, UWORD fDirection, UCHAR FAR * szDSN, SWORD cbDSNMax, SWORD FAR * pcbDSN, UCHAR FAR * szDescription, SWORD cbDescriptionMax, SWORD FAR * pcbDescription) Parameters:
HENV henv Environment handle.
UWORD fDirection This parameter is used to determine whether the driver manager will fetch the next datasource name in the list (use the SQL_FETCH_NEXT identifier) or whether the search starts from the beginning of the list (use the SQL_FETCH_FIRST identifier).
UCHAR FAR * szDSN A pointer to a storage buffer for the datasource name.
SWORD cbDSNMax Maximum length of the szDSN buffer. The maximum length supported by ODBC is SQL_MAX_DSN_LENGTH + 1.
SWORD FAR * pcbDSN The total number of bytes returned in szDSN. If the returned string won't fit in szDSN, the datasource name is truncated to cbDSNMax - 1 bytes.
UCHAR FAR * szDescription A pointer to a storage buffer for the description string of the driver associated with the datasource. The szDescription buffer should be at least 255 bytes long. Driver descriptions might be dBASE or SQL Server.
SWORD cbDescriptionMax Maximum length of szDescription.
SWORD FAR * pcbDescription The total number of bytes returned in szDescription. If the returned string won't fit in szDescription, the description is truncated to cbDescriptionMax - 1 bytes.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND There were no datasources remaining.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage:
Call the SQLDataSources() function to enumerate a list of the currently installed datasources. Always check the return code from this function for errors. Notes: You should call the SQLDataSources() function in a loop while the application checks the return code. When the return code is SQL_NO_DATA_FOUND, then all the datasources have been enumerated.
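The enumeration loop described in the Notes might look like the following minimal sketch; henv is assumed to have been allocated with SQLAllocEnv(), and error reporting is omitted.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

// List every installed datasource name along with its driver description.
void ListDataSources(HENV henv)
{
    UCHAR   szDSN[SQL_MAX_DSN_LENGTH + 1];
    UCHAR   szDescription[255];
    SWORD   cbDSN, cbDescription;
    RETCODE rc;

    // SQL_FETCH_FIRST starts at the top of the list; SQL_FETCH_NEXT walks
    // the rest until SQL_NO_DATA_FOUND is returned.
    rc = SQLDataSources(henv, SQL_FETCH_FIRST, szDSN, (SWORD)sizeof(szDSN),
                        &cbDSN, szDescription, (SWORD)sizeof(szDescription),
                        &cbDescription);
    while (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
    {
        printf("%s (%s)\n", (char *)szDSN, (char *)szDescription);
        rc = SQLDataSources(henv, SQL_FETCH_NEXT, szDSN, (SWORD)sizeof(szDSN),
                            &cbDSN, szDescription, (SWORD)sizeof(szDescription),
                            &cbDescription);
    }
}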
SQLDescribeCol()
Prototype:
RETCODE SQLDescribeCol(HSTMT hstmt, UWORD icol, UCHAR FAR * szColName, SWORD cbColNameMax, SWORD FAR * pcbColName, SWORD FAR * pfSqlType, UDWORD FAR * pcbColDef, SWORD FAR * pibScale, SWORD FAR * pfNullable) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD icol The index to the column in the table to which the variable is being bound.
UCHAR FAR * szColName A pointer to a string that will contain the column name. For columns that have no name, or when the name can't be determined, an empty string will be returned.
SWORD cbColNameMax The size of the szColName buffer.
SWORD FAR * pcbColName The number of bytes returned in szColName. If the column name won't fit in szColName, the column name is truncated to cbColNameMax - 1 bytes.
SWORD FAR * pfSqlType The column's SQL data type. Use one of the constants shown in Table 3.3 for this parameter.
UDWORD FAR * pcbColDef The column's precision, or zero if the precision can't be determined.
SWORD FAR * pibScale The column's scale, or zero if the scale can't be determined.
SWORD FAR * pfNullable A constant that indicates whether this column supports null data values. Will be either SQL_NO_NULLS, SQL_NULLABLE, or SQL_NULLABLE_UNKNOWN.
Identifier Description
SQL_BIGINT Integer data
SQL_BINARY Binary data
SQL_BIT Bit-field data
SQL_CHAR Character data
SQL_DATE Date data
SQL_DECIMAL Decimal data
SQL_DOUBLE Double data
SQL_FLOAT Floating-point data
SQL_INTEGER Integer data
SQL_LONGVARBINARY Binary data
SQL_LONGVARCHAR Variable-length character data
SQL_NUMERIC Numeric data
SQL_REAL Floating-point data
SQL_SMALLINT Integer data
SQL_TIME Time data
SQL_TIMESTAMP Timestamp data
SQL_TINYINT Integer data
SQL_VARBINARY Variable-length binary data
SQL_VARCHAR Variable-length character data
Other Driver-specific data
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLDescribeCol() function to get information about a specific column in a datasource. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
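The sketch below reports the name, type, precision, and nullability of every column in the current result set; it assumes a SELECT has already been executed on hstmt, and error handling is minimal.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

// Describe every column in the result set that is active on hstmt.
void DescribeResultSet(HSTMT hstmt)
{
    SWORD   cCols, icol;
    UCHAR   szColName[129];
    SWORD   cbColName, fSqlType, ibScale, fNullable;
    UDWORD  cbColDef;

    if (SQLNumResultCols(hstmt, &cCols) != SQL_SUCCESS)
        return;

    for (icol = 1; icol <= cCols; icol++)
    {
        if (SQLDescribeCol(hstmt, (UWORD)icol, szColName,
                           (SWORD)sizeof(szColName), &cbColName, &fSqlType,
                           &cbColDef, &ibScale, &fNullable) == SQL_SUCCESS)
            printf("%d: %s  type=%d  precision=%lu  nullable=%d\n",
                   icol, (char *)szColName, fSqlType,
                   (unsigned long)cbColDef, fNullable);
    }
}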
SQLDescribeParam()
Prototype:
RETCODE SQLDescribeParam(HSTMT hstmt, UWORD ipar, SWORD FAR * pfSqlType, UDWORD FAR * pcbColDef, SWORD FAR * pibScale, SWORD FAR * pfNullable) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD ipar The parameter marker index, ordered sequentially left to right. This index is one-based, not zero-based.
SWORD FAR * pfSqlType A pointer to a variable that will be used to return the SQL type of the parameter. Valid SQL types are listed in Table 3.4.
UDWORD FAR * pcbColDef The column's precision.
SWORD FAR * pibScale The column's scale.
SWORD FAR * pfNullable A constant that indicates whether this column supports null data values. Will be either SQL_NO_NULLS, SQL_NULLABLE, or SQL_NULLABLE_UNKNOWN.
Identifier Description
SQL_BIGINT Integer data
SQL_BINARY Binary data
SQL_BIT Bit-field data
SQL_CHAR Character data
SQL_DATE Date data
SQL_DECIMAL Decimal data
SQL_DOUBLE Double data
SQL_FLOAT Floating-point data
SQL_INTEGER Integer data
SQL_LONGVARBINARY Binary data
SQL_LONGVARCHAR Variable-length character data
SQL_NUMERIC Numeric data
SQL_REAL Floating-point data
SQL_SMALLINT Integer data
SQL_TIME Time data
SQL_TIMESTAMP Timestamp data
SQL_TINYINT Integer data
SQL_VARBINARY Variable-length binary data
SQL_VARCHAR Variable-length character data
Other Driver-specific data
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLDescribeParam() function to obtain a description of the parameter marker in an SQL statement. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLDisconnect()
Prototype:
RETCODE SQLDisconnect(HDBC hdbc) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLDisconnect() function to disconnect from the currently connected datasource. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
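A minimal teardown sketch follows, showing SQLDisconnect() in the context of releasing the handles created earlier; return codes are ignored here for brevity.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Tear down an ODBC session in the reverse order of its creation: drop the
// statement, disconnect, then free the connection and environment handles.
void CloseDatabase(HENV henv, HDBC hdbc, HSTMT hstmt)
{
    SQLFreeStmt(hstmt, SQL_DROP);   // release the statement and its cursor
    SQLDisconnect(hdbc);            // break the connection to the datasource
    SQLFreeConnect(hdbc);           // release the connection handle
    SQLFreeEnv(henv);               // release the environment handle
}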
SQLDriverConnect()
Prototype:
RETCODE SQLDriverConnect(HDBC hdbc, HWND hwnd, UCHAR FAR * szConnStrIn, SWORD cbConnStrIn, UCHAR FAR * szConnStrOut, SWORD cbConnStrOutMax, SWORD FAR * pcbConnStrOut, UWORD fDriverCompletion) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
HWND hwnd The window handle of the parent window. This is used if the SQLDriverConnect() function must display any dialog boxes to prompt the user for information (such as user IDs or passwords). If a NULL pointer is specified, SQLDriverConnect() won't present any dialog boxes.
UCHAR FAR * szConnStrIn A pointer to a full connection string, a partial connection string, or an empty string. Don't pass a NULL pointer.
SWORD cbConnStrIn The length of szConnStrIn.
UCHAR FAR * szConnStrOut A pointer to a buffer that will receive the resulting string that is used to connect to the datasource. This buffer should be at least 255 bytes long.
SWORD cbConnStrOutMax The size of the szConnStrOut buffer.
SWORD FAR * pcbConnStrOut A pointer to the variable that will receive the number of bytes that have been stored in szConnStrOut. If the buffer was too small to contain the connect string, the connect string in szConnStrOut is truncated to cbConnStrOutMax - 1 bytes.
UWORD fDriverCompletion Contains a flag that tells whether the driver manager or driver must prompt for more connection information. See Table 3.5 for valid values for this parameter.
Identifier Description
SQL_DRIVER_PROMPT The driver manager will display the datasources dialog box for the user to select the datasource.
SQL_DRIVER_COMPLETE The driver will use the connection string specified by the application if the connection string contains the DSN keyword; otherwise, the same action as SQL_DRIVER_PROMPT is taken.
SQL_DRIVER_COMPLETE_REQUIRED The driver will use the connection string specified by the application if the connection string contains the DSN keyword; otherwise, the same action as SQL_DRIVER_PROMPT is taken.
SQL_DRIVER_NOPROMPT The driver manager uses the connection string specified by the application.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND The user canceled the connection dialog box, so no connection was made.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLDriverConnect() function to connect to a specific driver. Always check the return code from this function
for errors. Notes: If this function fails, use the SQLError() function to find out why.
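The following sketch connects with a partial connection string and lets the driver prompt for whatever is missing. The hdbc is assumed to be allocated, hwnd is the application's window used as the parent for any dialog boxes, and the DSN name "Accounting" is a placeholder.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

// Connect with SQLDriverConnect(), letting the driver complete the string.
RETCODE DriverConnectExample(HDBC hdbc, HWND hwnd)
{
    UCHAR   szConnStrOut[512];
    SWORD   cbConnStrOut;
    RETCODE rc;

    rc = SQLDriverConnect(hdbc, hwnd, (UCHAR FAR *)"DSN=Accounting", SQL_NTS,
                          szConnStrOut, (SWORD)sizeof(szConnStrOut),
                          &cbConnStrOut, SQL_DRIVER_COMPLETE);
    if (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
        // szConnStrOut holds the completed string; save it to reconnect
        // later without prompting.
        printf("Connected with: %s\n", (char *)szConnStrOut);
    return rc;
}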
SQLDrivers()
Prototype:
RETCODE SQLDrivers(HENV henv, UWORD fDirection, UCHAR FAR * szDriverDesc, SWORD cbDriverDescMax, SWORD FAR * pcbDriverDesc, UCHAR FAR * szDriverAttributes, SWORD cbDrvrAttrMax, SWORD FAR * pcbDrvrAttr) Parameters:
HENV henv An environment handle, as returned by a call to SQLAllocEnv().
UWORD fDirection This parameter is used to determine whether the driver manager fetches the next, or first, driver description in the list. Use SQL_FETCH_NEXT or SQL_FETCH_FIRST.
UCHAR FAR * szDriverDesc A pointer to a buffer for the driver description.
SWORD cbDriverDescMax The size of the szDriverDesc buffer.
SWORD FAR * pcbDriverDesc A pointer to a variable that will hold the number of bytes returned in szDriverDesc. If the size of the string returned is too large for szDriverDesc, the driver description in szDriverDesc will be truncated to cbDriverDescMax - 1 bytes.
UCHAR FAR * szDriverAttributes A pointer to a buffer that will hold the list of driver attribute value pairs.
SWORD cbDrvrAttrMax The size of the szDriverAttributes buffer.
SWORD FAR * pcbDrvrAttr A pointer to a variable that will hold the number of bytes placed in szDriverAttributes. If the size of the string returned is too large for szDriverAttributes, the list of attribute value pairs in szDriverAttributes will be truncated to cbDrvrAttrMax - 1 bytes.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLDrivers() function to list the drivers and driver attributes. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLError()
Prototype:
RETCODE SQLError(HENV henv, HDBC hdbc, HSTMT hstmt, UCHAR FAR * szSqlState, SDWORD FAR * pfNativeError, UCHAR FAR * szErrorMsg, SWORD cbErrorMsgMax, SWORD FAR * pcbErrorMsg) Parameters:
HENV henv A handle to an HENV as returned by the call to the SQLAllocEnv() function, or SQL_NULL_HENV to query all open environments.
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function, or SQL_NULL_HDBC to query all open database connections.
HSTMT hstmt A handle to an HSTMT as returned by the call to the SQLAllocStmt() function, or SQL_NULL_HSTMT to query all open statements.
UCHAR FAR * szSqlState A pointer to a buffer that will receive the SQLSTATE string for the error.
SDWORD FAR * pfNativeError The returned native error code.
UCHAR FAR * szErrorMsg A buffer that will point to the error message text.
SWORD cbErrorMsgMax A variable that specifies the maximum length of the szErrorMsg buffer. This buffer's size must be less than or equal to SQL_MAX_MESSAGE_LENGTH - 1.
SWORD FAR * pcbErrorMsg A pointer to a variable that will contain the number of bytes in szErrorMsg. If the string is too large for szErrorMsg, the text in szErrorMsg will be truncated to cbErrorMsgMax - 1 bytes.
Return Value: This function will return one of the following values:
SQL_SUCCESS
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available. SQL_NO_DATA_FOUND SQL_ERROR SLQ_INVALID_HANDLE The command didn't find any error conditions on which to report. The function failed. Call SQLError() to get more information about the specific failure. The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should try to recover, displaying or saving whatever diagnostic information is appropriate, or the error should be corrected and the function should be re-executed. Usage: Call the SQLError() function to obtain more information about the error condition. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
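A simple reporting helper, such as the sketch below, is often called after any ODBC failure. Passing SQL_NULL_HSTMT (and SQL_NULL_HDBC) limits the report to errors associated with the connection or the environment.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

// Drain and display every error queued for the given handles.
void ReportErrors(HENV henv, HDBC hdbc, HSTMT hstmt)
{
    UCHAR   szSqlState[6];
    UCHAR   szErrorMsg[SQL_MAX_MESSAGE_LENGTH - 1];
    SDWORD  fNativeError;
    SWORD   cbErrorMsg;

    // SQLError() returns SQL_NO_DATA_FOUND when the error queue is empty.
    while (SQLError(henv, hdbc, hstmt, szSqlState, &fNativeError,
                    szErrorMsg, (SWORD)sizeof(szErrorMsg), &cbErrorMsg)
           == SQL_SUCCESS)
        printf("SQLSTATE %s, native %ld: %s\n",
               (char *)szSqlState, (long)fNativeError, (char *)szErrorMsg);
}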
SQLExecDirect()
Prototype:
RETCODE SQLExecDirect(HSTMT hstmt, UCHAR FAR * szSqlStr, SDWORD cbSqlStr) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szSqlStr A pointer to an SQL statement to be executed.
SDWORD cbSqlStr The length of the string in szSqlStr.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NEED_DATA The function needs more information to process the request.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLExecDirect() function to execute (usually once) a preparable SQL statement. Always check the return code from this function for errors. Notes: This function is probably the fastest way to execute SQL statements when the statement needs to be executed only one time. If this function fails, use the SQLError() function to find out why.
SQLExecute()
Prototype:
RETCODE SQLExecute(HSTMT hstmt) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NEED_DATA The function needs more information to process the request.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLExecute() function to execute a prepared statement using the parameter values contained in marker variables, if there are any. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
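The prepare-once, execute-many pattern is where SQLExecute() pays off; the sketch below deletes a batch of rows using one prepared statement. The Orders table and its OrderNo column are hypothetical, and hstmt is assumed to be allocated on an open connection.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Prepare a parameterized DELETE once, then execute it for each value.
RETCODE DeleteOrders(HSTMT hstmt, long FAR *alOrderNo, int nCount)
{
    long    lOrderNo;
    SDWORD  cbOrderNo = 0;
    RETCODE rc;
    int     i;

    rc = SQLPrepare(hstmt,
                    (UCHAR FAR *)"DELETE FROM Orders WHERE OrderNo = ?",
                    SQL_NTS);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        return rc;

    rc = SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                          0, 0, &lOrderNo, 0, &cbOrderNo);

    // Each SQLExecute() picks up whatever value lOrderNo holds at the time.
    for (i = 0; i < nCount && (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO); i++)
    {
        lOrderNo = alOrderNo[i];
        rc = SQLExecute(hstmt);
    }
    return rc;
}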
SQLExtendedFetch()
Prototype:
RETCODE SQLExtendedFetch(HSTMT hstmt, UWORD fFetchType, SDWORD irow, UDWORD FAR * pcrow, UWORD FAR * rgfRowStatus) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD fFetchType Specifies the type of fetch desired. Table 3.6 describes valid values for this parameter.
SDWORD irow Specifies the number of the row to be fetched.
UDWORD FAR * pcrow Returns the count of the number of rows that were actually fetched.
UWORD FAR * rgfRowStatus A pointer to an array that receives a status value for each row in the rowset.
Table 3.6. Valid fFetchType values.
SQL_FETCH_NEXT Fetch the next row.
SQL_FETCH_FIRST Fetch the first row.
SQL_FETCH_LAST Fetch the last row.
SQL_FETCH_PRIOR Fetch the previous row.
SQL_FETCH_ABSOLUTE Fetch the row specified absolutely.
SQL_FETCH_RELATIVE Fetch the row specified relative to the current row.
SQL_FETCH_BOOKMARK Fetch the marked row.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND There were no more rows to fetch.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLExtendedFetch() function to fetch one or more rows from a result set. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
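The sketch below fetches a result set five rows at a time using the default column-wise binding. The Customers table is hypothetical, hstmt is assumed to be allocated, and error handling is minimal.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

#define ROWSET_SIZE 5

// Fetch rows in rowsets of five with SQLExtendedFetch().
void FetchInRowsets(HSTMT hstmt)
{
    UCHAR   aszName[ROWSET_SIZE][51];
    SDWORD  acbName[ROWSET_SIZE];
    UWORD   afRowStatus[ROWSET_SIZE];
    UDWORD  crow, irow;
    RETCODE rc;

    // Set the rowset size before executing the SELECT.
    SQLSetStmtOption(hstmt, SQL_ROWSET_SIZE, ROWSET_SIZE);
    rc = SQLExecDirect(hstmt, (UCHAR FAR *)"SELECT Name FROM Customers", SQL_NTS);
    if (rc != SQL_SUCCESS && rc != SQL_SUCCESS_WITH_INFO)
        return;

    // With column-wise binding (the default), each element of aszName
    // receives one row of column 1.
    SQLBindCol(hstmt, 1, SQL_C_CHAR, aszName, sizeof(aszName[0]), acbName);

    rc = SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 0, &crow, afRowStatus);
    while (rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO)
    {
        for (irow = 0; irow < crow; irow++)
            printf("%s\n", (char *)aszName[irow]);
        rc = SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 0, &crow, afRowStatus);
    }
}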
SQLFetch()
Prototype:
RETCODE SQLFetch(HSTMT hstmt) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND There were no more rows to fetch.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLFetch() function to fetch a single row of data from the result set. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLForeignKeys()
Prototype:
RETCODE SQLForeignKeys(HSTMT hstmt, UCHAR FAR * szPkTableQualifier, SWORD cbPkTableQualifier, UCHAR FAR * szPkTableOwner, SWORD cbPkTableOwner, UCHAR FAR * szPkTableName, SWORD cbPkTableName, UCHAR FAR * szFkTableQualifier, SWORD cbFkTableQualifier, UCHAR FAR * szFkTableOwner, SWORD cbFkTableOwner, UCHAR FAR * szFkTableName, SWORD cbFkTableName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szPkTableQualifier A pointer to the primary key table qualifier. SWORD cbPkTableQualifier The length of szPkTableQualifier.
UCHAR FAR * szPkTableOwner SWORD cbPkTableOwner UCHAR FAR * szPkTableName SWORD cbPkTableName
A pointer to the primary key owner name. The length of szPkTableOwner. A pointer to the primary key table name. The length of szPkTableName.
UCHAR FAR * szFkTableQualifier A pointer to the foreign key table qualifier. SWORD cbFkTableQualifier UCHAR FAR * szFkTableOwner SWORD cbFkTableOwner UCHAR FAR * szFkTableName SWORD cbFkTableName Return Value: This function will return one of the following values: The length of szFkTableQualifier. A pointer to the foreign key owner name. The length of szFkTableOwner. A pointer to the foreign key table name. The length of szFkTableName.
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLForeignKeys() function to return either a list of foreign keys in the specified table or a list of foreign keys in other tables that refer to the primary key in the specified table. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLFreeConnect()
Prototype:
RETCODE SQLFreeConnect(HDBC hdbc) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLFreeConnect() function to release the connection handle allocated with the SQLAllocConnect() function. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLFreeEnv()
Prototype:
RETCODE SQLFreeEnv(HENV henv) Parameters:
HENV henv A handle to an HENV as returned by the call to the SQLAllocEnv() function.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLFreeEnv() function to free the environment established by the SQLAllocEnv() function. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLFreeStmt()
Prototype:
RETCODE SQLFreeStmt(HSTMT hstmt, UWORD fOption) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD fOption One of the following options:
SQL_CLOSE Used to close the cursor associated with hstmt (if one was defined) and discard any pending results. Later, the application will be able to reopen the cursor by executing a SELECT statement again with the same or different parameter values.
SQL_DROP Used to release the hstmt, free all resources associated with it, close the cursor, and discard any rows that are pending. This option terminates all access to the hstmt. The hstmt may not be reused.
SQL_UNBIND Used to release any column buffers bound by SQLBindCol() for the given hstmt.
SQL_RESET_PARAMS Used to release all parameter buffers set by SQLBindParameter() for the given hstmt.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLFreeStmt() function to free the statement handle allocated by the SQLAllocStmt() function. Always check the return code from this function for errors. Notes:
If this function fails, use the SQLError() function to find out why.
SQLGetConnectOption()
Prototype:
RETCODE SQLGetConnectOption(HDBC hdbc, UWORD fOption, PTR pvParam) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UWORD fOption Specifies the option to retrieve. PTR pvParam The buffer where the value associated with fOption will be placed. This variable will be either a 32-bit integer value or a pointer to a null-terminated character string.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND The command didn't find the requested information.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLGetConnectOption() function to get the settings for the current connection. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLGetCursorName()
Prototype:
RETCODE SQLGetCursorName(HSTMT hstmt, UCHAR FAR * szCursor, SWORD cbCursorMax, SWORD FAR * pcbCursor) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szCursor A pointer to a buffer that will receive the cursor name.
SWORD cbCursorMax The length of the szCursor buffer.
SWORD FAR * pcbCursor A pointer to a variable that will have the length of the cursor name stored in it. If the cursor name is too large for szCursor, the cursor name in szCursor is truncated to cbCursorMax - 1 bytes.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage:
Call the SQLGetCursorName() function to get the name for the cursor specified by the hstmt parameter. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLGetData()
Prototype:
RETCODE SQLGetData(HSTMT hstmt, UWORD icol, SWORD fCType, PTR rgbValue, SDWORD cbValueMax, SDWORD FAR * pcbValue) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD icol The column number, starting at column 1. Specifying a column number of 0 will retrieve a bookmark for the row. Neither ODBC 1.0 drivers nor SQLFetch() support bookmarks.
SWORD fCType A constant that specifies the column's resultant C data type. See Table 3.8 for valid values.
PTR rgbValue A pointer to the location used to store the data.
SDWORD cbValueMax Specifies the length of the buffer rgbValue.
SDWORD FAR * pcbValue A pointer to a variable that receives the number of bytes returned in rgbValue.
SQL_C_DATE Date data.
SQL_C_DOUBLE Floating-point data.
SQL_C_FLOAT Floating-point data.
SQL_C_SLONG Integer data.
SQL_C_SSHORT Integer data.
SQL_C_STINYINT Integer data.
SQL_C_TIME Time-formatted data.
SQL_C_TIMESTAMP Timestamp formatted data.
SQL_C_ULONG Integer data.
SQL_C_USHORT Integer data.
SQL_C_UTINYINT Integer data.
SQL_C_DEFAULT This identifier specifies that data be converted to its default C data type.
SQL_C_LONG Integer data.
SQL_C_SHORT Integer data.
SQL_C_TINYINT Integer data.
Drivers must support the final three C data type values listed in Table 3.8 from ODBC 1.0. Applications must use these values, rather than the ODBC 2.0 values, when calling an ODBC 1.0 driver.
The pcbValue argument returns the total number of bytes (excluding the null-termination byte for character data) returned in the rgbValue parameter, or one of the identifiers SQL_NULL_DATA or SQL_NO_TOTAL. For character data, if pcbValue is greater than or equal to cbValueMax, the data in rgbValue is truncated to cbValueMax - 1 bytes and is null-terminated by the driver. For binary data, if pcbValue is equal to or greater than cbValueMax, the data in rgbValue is truncated to cbValueMax bytes.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND The command didn't find the requested data.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLGetData() function to obtain information about a specific column in a datasource. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
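SQLGetData() is often used to pull long, unbound columns in pieces after SQLFetch(); the sketch below shows the idea. The column number 2 and the idea of a long Notes column are hypothetical, and a row must already have been fetched on hstmt.

#include <windows.h>
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

// Retrieve a long character column piece by piece after SQLFetch().
void PrintLongColumn(HSTMT hstmt)
{
    UCHAR   szChunk[256];
    SDWORD  cbChunk;
    RETCODE rc;

    rc = SQLGetData(hstmt, 2, SQL_C_CHAR, szChunk,
                    (SDWORD)sizeof(szChunk), &cbChunk);
    while ((rc == SQL_SUCCESS || rc == SQL_SUCCESS_WITH_INFO) &&
           cbChunk != SQL_NULL_DATA)
    {
        printf("%s", (char *)szChunk);         // each call yields the next piece
        if (rc == SQL_SUCCESS)                 // SQL_SUCCESS means the last piece
            break;
        rc = SQLGetData(hstmt, 2, SQL_C_CHAR, szChunk,
                        (SDWORD)sizeof(szChunk), &cbChunk);
    }
}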
SQLGetFunctions()
Prototype:
RETCODE SQLGetFunctions(HDBC hdbc, UWORD fFunction, UWORD FAR * pfExists) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UWORD fFunction Either the constant SQL_API_ALL_FUNCTIONS or a #defined value that will identify the ODBC function for which information is desired.
UWORD FAR * pfExists A pointer to a variable that will contain either a pointer to an array (if fFunction is SQL_API_ALL_FUNCTIONS) or the information returned for a specific function. Return Value:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLGetFunctions() function to obtain information about other SQL...() functions. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why. The functions supported are shown in Table 3.10.
Table 3.10. ODBC functions reported by SQLGetFunctions().
Implemented in the driver manager: SQLGetFunctions, SQLDataSources, and SQLDrivers.
ODBC core functions: SQL_API_SQLALLOCCONNECT, SQL_API_SQLALLOCENV, SQL_API_SQLALLOCSTMT, SQL_API_SQLBINDCOL, SQL_API_SQLCANCEL, SQL_API_SQLCOLATTRIBUTES, SQL_API_SQLCONNECT, SQL_API_SQLDESCRIBECOL, SQL_API_SQLDISCONNECT, SQL_API_SQLERROR, SQL_API_SQLEXECDIRECT, SQL_API_SQLEXECUTE, SQL_API_SQLFETCH, SQL_API_SQLFREECONNECT, SQL_API_SQLFREEENV, SQL_API_SQLFREESTMT, SQL_API_SQLGETCURSORNAME, SQL_API_SQLNUMRESULTCOLS, SQL_API_SQLPREPARE, SQL_API_SQLROWCOUNT, SQL_API_SQLSETCURSORNAME, SQL_API_SQLSETPARAM, and SQL_API_SQLTRANSACT.
SQL_API_SQLSETPARAM and SQL_API_SQLBINDPARAMETER For ODBC 1.0 drivers, SQLGetFunctions() will return TRUE if the driver supports SQLSetParam(); for ODBC 2.0 drivers, SQLGetFunctions() will return TRUE if the driver supports SQLBindParameter().
ODBC extension level 1 functions: SQL_API_SQLCOLUMNS, SQL_API_SQLDRIVERCONNECT, SQL_API_SQLGETCONNECTOPTION, SQL_API_SQLGETDATA, SQL_API_SQLGETFUNCTIONS, SQL_API_SQLGETINFO, SQL_API_SQLGETSTMTOPTION, SQL_API_SQLGETTYPEINFO, SQL_API_SQLPARAMDATA, SQL_API_SQLPUTDATA, SQL_API_SQLSETCONNECTOPTION, SQL_API_SQLSETSTMTOPTION, SQL_API_SQLSPECIALCOLUMNS, SQL_API_SQLSTATISTICS, and SQL_API_SQLTABLES.
ODBC extension level 2 functions: SQL_API_SQLBROWSECONNECT, SQL_API_SQLCOLUMNPRIVILEGES, SQL_API_SQLDATASOURCES, SQL_API_SQLDESCRIBEPARAM, SQL_API_SQLDRIVERS, SQL_API_SQLEXTENDEDFETCH, SQL_API_SQLFOREIGNKEYS, SQL_API_SQLMORERESULTS, SQL_API_SQLNATIVESQL, SQL_API_SQLNUMPARAMS, SQL_API_SQLPARAMOPTIONS, SQL_API_SQLPRIMARYKEYS, SQL_API_SQLPROCEDURECOLUMNS, SQL_API_SQLPROCEDURES, SQL_API_SQLSETPOS, SQL_API_SQLSETSCROLLOPTIONS, and SQL_API_SQLTABLEPRIVILEGES.
SQLGetInfo()
Prototype:
RETCODE SQLGetInfo(HDBC hdbc, UWORD fInfoType, PTR rgbInfoValue, SWORD cbInfoValueMax, SWORD FAR * pcbInfoValue) Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UWORD fInfoType The type of information that is desired. See the ODBC documentation for more information about this parameter.
PTR rgbInfoValue A pointer to a buffer used to store the information.
SWORD cbInfoValueMax The size of the buffer rgbInfoValue.
SWORD FAR * pcbInfoValue A pointer to a variable that will receive the count of the number of bytes stored in rgbInfoValue. Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: The SQLGetInfo() function returns a vast array of information about ODBC. Always check the return code from this function for errors. Notes: Refer to the documentation supplied with ODBC and Visual C++ for more information about this function. If this function fails, use the SQLError() function to find out why.
SQLGetStmtOption()
Prototype:
RETCODE SQLGetStmtOption(HSTMT hstmt, UWORD fOption, PTR pvParam) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD fOption Value that indicates which option to retrieve.
PTR pvParam A pointer to a buffer that will receive the option's value.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLGetStmtOption() function to retrieve information about the specified statement. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLGetTypeInfo()
Prototype:
RETCODE SQLGetTypeInfo(HSTMT hstmt, SWORD fSqlType) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
SWORD fSqlType The SQL data type. This must be one of the identifiers from Table 3.11.
Identifier Description
SQL_BIGINT Integer data
SQL_BINARY Binary data
SQL_BIT Bit-field data
SQL_CHAR Character data
SQL_DATE Date data
SQL_DECIMAL Decimal data
SQL_DOUBLE Double data
SQL_FLOAT Floating-point data
SQL_INTEGER Integer data
SQL_LONGVARBINARY Binary data
SQL_LONGVARCHAR Variable-length character data
SQL_NUMERIC Numeric data
SQL_REAL Floating-point data
SQL_SMALLINT Integer data
SQL_TIME Time data
SQL_TIMESTAMP Timestamp data
SQL_TINYINT Integer data
SQL_VARBINARY Variable-length binary data
SQL_VARCHAR Variable-length character data
SQL_ALL_TYPES All supported data types
Other Driver-specific data
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLGetTypeInfo() function to return information about a specific data type. Always check the return code from this function for errors. Notes: This function returns the results as a result set. If this function fails, use the SQLError() function to find out why.
SQLMoreResults()
Prototype:
RETCODE SQLMoreResults(HSTMT hstmt) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NO_DATA_FOUND There are no more result sets to process.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLMoreResults() function to determine whether there are more results available from the SELECT, UPDATE, INSERT, or DELETE SQL statements. If there are more results, SQLMoreResults() will initiate processing for the additional results. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
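The loop below is a minimal sketch of processing every result set produced by a batch of statements on hstmt; here each result set's rows are simply fetched and discarded, where a real application would bind columns and use the data.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>

// Walk every result set pending on hstmt, fetching (and discarding) rows.
void DrainAllResultSets(HSTMT hstmt)
{
    RETCODE rcFetch, rcMore;

    do
    {
        // Fetch every row of the current result set.
        do
        {
            rcFetch = SQLFetch(hstmt);
        } while (rcFetch == SQL_SUCCESS || rcFetch == SQL_SUCCESS_WITH_INFO);

        // SQLMoreResults() switches hstmt to the next result set, if any;
        // SQL_NO_DATA_FOUND ends the outer loop.
        rcMore = SQLMoreResults(hstmt);
    } while (rcMore == SQL_SUCCESS || rcMore == SQL_SUCCESS_WITH_INFO);
}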
SQLNativeSql()
Prototype:
RETCODE SQLNativeSql(HDBC hdbc, UCHAR FAR * szSqlStrIn, SDWORD cbSqlStrIn, UCHAR FAR * szSqlStr, SDWORD cbSqlStrMax, SDWORD FAR * pcbSqlStr)
Parameters:
HDBC hdbc A handle to an HDBC as returned by the SQLAllocConnect() function.
UCHAR FAR * szSqlStrIn A pointer to the buffer holding the SQL statement that is to be translated.
SDWORD cbSqlStrIn The length of the szSqlStrIn text string.
UCHAR FAR * szSqlStr A pointer to a buffer that will hold the translated SQL string.
SDWORD cbSqlStrMax The size of the szSqlStr buffer.
SDWORD FAR * pcbSqlStr The number of bytes stored in szSqlStr. If the translated SQL string is too long to fit in szSqlStr, the translated SQL string in szSqlStr is truncated to cbSqlStrMax - 1 bytes.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLNativeSql() function to translate an SQL string for a native ODBC driver. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLNumParams()
Prototype:
RETCODE SQLNumParams(HSTMT hstmt, SWORD FAR * pcpar) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
SWORD FAR * pcpar A pointer to a variable that will hold the number of parameters in the statement. Return Value: This function will return one of the following values:
SQL_SUCCESS The function was successful.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted. Usage: Call the SQLNumParams() function to get the number of parameters in the SQL statement. Always check the return code from this function for errors. Notes: If this function fails, use the SQLError() function to find out why.
SQLNumResultCols()
Prototype:
RETCODE SQLNumResultCols(HSTMT hstmt, SWORD FAR * pccol)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
SWORD FAR * pccol A pointer to a variable that will hold the number of columns in the result set.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLNumResultCols() function to find out how many columns are in the result set. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
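As a minimal sketch (not taken from the chapter's sample code), the following fragment assumes that sql.h and sqlext.h are included and that hstmt is a statement handle on which a SELECT has already been executed; it uses SQLNumResultCols() to size a loop and SQLDescribeCol() to report each column name:

SWORD   nCols;
SWORD   i;
UCHAR   szColName[129];
SWORD   cbColName, fSqlType, ibScale, fNullable;
UDWORD  cbColDef;
char    szMsg[200];

if (SQLNumResultCols(hstmt, &nCols) == SQL_SUCCESS)
{
    for (i = 1; i <= nCols; i++)
    {   // Column numbers are 1-based in ODBC
        SQLDescribeCol(hstmt, i, szColName, sizeof(szColName), &cbColName,
            &fSqlType, &cbColDef, &ibScale, &fNullable);
        sprintf(szMsg, "Column %d is '%s'\n", i, szColName);
        OutputDebugString(szMsg);
    }
}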
SQLParamData()
Prototype:
RETCODE SQLParamData(HSTMT hstmt, PTR FAR * prgbValue)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
PTR FAR * prgbValue A pointer to a variable in which the driver returns the rgbValue that was set with SQLBindParameter() for the parameter that needs data.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NEED_DATA The function needs more information to process the request.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLParamData() function, together with SQLPutData(), to supply parameter data at statement execution time. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
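The following is a minimal sketch of the data-at-execution sequence (it is not taken from the chapter's sample code). It assumes that hstmt is a valid statement handle, that the usual ODBC headers are included, and that the Notes table and Comments column are hypothetical names:

char    szChunk[] = "A long comment value supplied at execution time.";
SDWORD  cbComment = SQL_DATA_AT_EXEC;
PTR     pToken;
RETCODE RC;

// Mark parameter 1 as data-at-execution; (PTR)1 is just a marker value
// that SQLParamData() hands back so you know which parameter wants data.
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_LONGVARCHAR,
    0, 0, (PTR)1, 0, &cbComment);

RC = SQLExecDirect(hstmt,
    (UCHAR FAR *)"INSERT INTO Notes (Comments) VALUES (?)", SQL_NTS);

while (RC == SQL_NEED_DATA)
{   // Ask which parameter wants data, then feed it with SQLPutData()
    RC = SQLParamData(hstmt, &pToken);
    if (RC == SQL_NEED_DATA)
    {
        SQLPutData(hstmt, szChunk, SQL_NTS);
        // Looping back to SQLParamData() either asks for the next
        // parameter or finishes executing the statement.
    }
}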
SQLParamOptions()
Prototype:
RETCODE SQLParamOptions(HSTMT hstmt, UDWORD crow, UDWORD FAR * pirow)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UDWORD crow The number of values for each parameter. If crow is greater than 1, the rgbValue argument in SQLBindParameter() points to an array of parameter values, and pcbValue points to an array of lengths.
UDWORD FAR * pirow A pointer to a buffer used to store the current row number.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLParamOptions() function to specify values for the parameters created with SQLBindParameter(). Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLPrepare()
Prototype:
RETCODE SQLPrepare(HSTMT hstmt, UCHAR FAR * szSqlStr, SDWORD cbSqlStr)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szSqlStr A pointer to the buffer containing the SQL text string.
SDWORD cbSqlStr The length of the string in szSqlStr.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLPrepare() function to prepare an SQL string for execution. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
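As a minimal prepare-and-execute sketch (hstmt is assumed to be an allocated statement handle, and the Customers table and CustID column are hypothetical names):

RETCODE RC;
SDWORD  lCustID = 1001;
SDWORD  cbCustID = 0;

RC = SQLPrepare(hstmt,
    (UCHAR FAR *)"SELECT * FROM Customers WHERE CustID = ?", SQL_NTS);

if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
{   // Bind the single parameter, then execute the prepared statement
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
        0, 0, &lCustID, 0, &cbCustID);
    RC = SQLExecute(hstmt);
}

Because the statement is prepared once, it can be reexecuted with new parameter values without the driver having to parse the SQL again.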
SQLPrimaryKeys()
Prototype:
RETCODE SQLPrimaryKeys(HSTMT hstmt, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner, SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szTableQualifier A pointer to a buffer containing the qualifier name.
SWORD cbTableQualifier The length of the string in szTableQualifier.
UCHAR FAR * szTableOwner A pointer to a buffer containing the table owner.
SWORD cbTableOwner The length of the string in szTableOwner.
UCHAR FAR * szTableName A pointer to a buffer containing the table name.
SWORD cbTableName The length of the string in szTableName.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLPrimaryKeys() function to retrieve the column names that make up the primary key for the specified table. Always check the return code from this function for errors.
Notes:
If this function fails, use the SQLError() function to find out why.
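A minimal sketch (the Customers table name is only an example, and hstmt is assumed to be an allocated statement handle) that lists the primary key columns returned in the result set:

UCHAR   szPkCol[129];
SDWORD  cbPkCol;
char    szMsg[160];
RETCODE RC;

RC = SQLPrimaryKeys(hstmt, NULL, 0, NULL, 0,
    (UCHAR FAR *)"Customers", SQL_NTS);

if (RC == SQL_SUCCESS)
{   // Column 4 of the SQLPrimaryKeys() result set is the column name
    SQLBindCol(hstmt, 4, SQL_C_CHAR, szPkCol, sizeof(szPkCol), &cbPkCol);
    while (SQLFetch(hstmt) == SQL_SUCCESS)
    {
        sprintf(szMsg, "Primary key column: %s\n", szPkCol);
        OutputDebugString(szMsg);
    }
}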
SQLProcedureColumns()
Prototype:
RETCODE SQLProcedureColumns(HSTMT hstmt, UCHAR FAR * szProcQualifier, SWORD cbProcQualifier, UCHAR FAR * szProcOwner, SWORD cbProcOwner, UCHAR FAR * szProcName, SWORD cbProcName, UCHAR FAR * szColumnName, SWORD cbColumnName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szProcQualifier A pointer to a buffer containing the procedure qualifier name.
SWORD cbProcQualifier The length of the string contained in szProcQualifier.
UCHAR FAR * szProcOwner A pointer to a buffer containing the string search pattern for procedure owner names.
SWORD cbProcOwner The length of the string contained in szProcOwner.
UCHAR FAR * szProcName A pointer to a buffer containing the string search pattern for procedure names.
SWORD cbProcName The length of the string in szProcName.
UCHAR FAR * szColumnName A pointer to a buffer containing the string search pattern for column names.
SWORD cbColumnName The length of the string in szColumnName.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLProcedureColumns() function to obtain a list of input and output parameters for the specified procedure. This function also retrieves the columns that make up the result set for the procedure. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLProcedures()
Prototype:
RETCODE SQLProcedures(HSTMT hstmt, UCHAR FAR * szProcQualifier, SWORD cbProcQualifier, UCHAR FAR * szProcOwner, SWORD cbProcOwner, UCHAR FAR * szProcName, SWORD cbProcName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szProcQualifier A pointer to the buffer that contains the procedure qualifier.
SWORD cbProcQualifier The length of the string in szProcQualifier.
UCHAR FAR * szProcOwner A pointer to the buffer containing the string search pattern for procedure owner names.
SWORD cbProcOwner The length of the string in szProcOwner.
UCHAR FAR * szProcName A pointer to the buffer containing the string search pattern for procedure names.
SWORD cbProcName The length of the string in szProcName.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLProcedures() function to retrieve a list of all the procedure names stored in the specified datasource. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLPutData()
Prototype:
RETCODE SQLPutData(HSTMT hstmt, PTR rgbValue, SDWORD cbValue)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
PTR rgbValue A pointer to a buffer holding the actual data for the parameter or column. The data must use the C data type specified in SQLBindParameter() or SQLBindCol().
SDWORD cbValue The length of the data in rgbValue.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLPutData() function to send data for a parameter or column to the driver at statement execution time. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLRowCount()
Prototype:
RETCODE SQLRowCount(HSTMT hstmt, SDWORD FAR * pcrow)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
SDWORD FAR * pcrow A pointer to a variable that will hold the number of rows affected by the request, or -1 if the number of affected rows isn't available.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLRowCount() function to determine how many rows were affected by an UPDATE, DELETE, or INSERT statement, or by an SQL_UPDATE, SQL_DELETE, or SQL_ADD operation. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
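A minimal sketch, assuming hstmt is an allocated statement handle and the Orders table and Shipped column are hypothetical names:

SDWORD  crow;
char    szMsg[80];

if (SQLExecDirect(hstmt,
        (UCHAR FAR *)"DELETE FROM Orders WHERE Shipped = 1",
        SQL_NTS) == SQL_SUCCESS)
{
    SQLRowCount(hstmt, &crow);      // crow may be -1 if the driver can't tell
    sprintf(szMsg, "%ld row(s) deleted\n", (long)crow);
    OutputDebugString(szMsg);
}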
SQLSetConnectOption()
Prototype:
RETCODE SQLSetConnectOption(HDBC hdbc, UWORD fOption, UDWORD vParam)
Parameters:
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UWORD fOption The option to set. See the ODBC documentation for more details.
UDWORD vParam The value to be associated with the option specified in fOption.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLSetConnectOption() function to set connection options. Always check the return code from this function for errors.
Notes: The SQLSetConnectOption() function accepts many valid option and value combinations, which are detailed in the ODBC documentation. If this function fails, use the SQLError() function to find out why.
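For example, a sketch that turns off autocommit so that updates can be grouped into a single transaction (hdbc is assumed to be a connected HDBC):

RETCODE RC;

// Each statement is no longer committed automatically; the work is
// held until SQLTransact() commits or rolls it back.
RC = SQLSetConnectOption(hdbc, SQL_AUTOCOMMIT, SQL_AUTOCOMMIT_OFF);

if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
{   // The driver may not support manual-commit mode
}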
SQLSetCursorName()
Prototype:
RETCODE SQLSetCursorName(HSTMT hstmt, UCHAR FAR * szCursor, SWORD cbCursor)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szCursor A pointer to a buffer containing the cursor name.
SWORD cbCursor The length of the string contained in szCursor.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLSetCursorName() function to associate a name with the specified hstmt. Always check the return code from this function for errors.
Notes: If your application doesn't set cursor names, the driver automatically generates default cursor names. If this function fails, use the SQLError() function to find out why.
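The most common use of a named cursor is a positioned UPDATE. The following sketch (not from the chapter's sample code) assumes two statement handles on the same connection, hstmtSelect and hstmtUpdate, a hypothetical Customers table, and a driver that supports positioned updates:

SQLSetCursorName(hstmtSelect, (UCHAR FAR *)"CUST_CURSOR", SQL_NTS);

SQLExecDirect(hstmtSelect,
    (UCHAR FAR *)"SELECT CustName FROM Customers FOR UPDATE", SQL_NTS);

if (SQLFetch(hstmtSelect) == SQL_SUCCESS)
{   // Update the row on which the cursor is currently positioned
    SQLExecDirect(hstmtUpdate,
        (UCHAR FAR *)"UPDATE Customers SET CustName = 'Acme' "
                     "WHERE CURRENT OF CUST_CURSOR", SQL_NTS);
}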
SQLSetPos()
Prototype:
RETCODE SQLSetPos(HSTMT hstmt, UWORD irow, UWORD fOption, UWORD fLock) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD irow Indicates the position of the row in the rowset on which to perform the specified operation. If 0, the operation is applied to every row in the rowset.
UWORD fOption The operation to perform: SQL_POSITION, SQL_REFRESH, SQL_UPDATE, SQL_DELETE, or SQL_ADD.
UWORD fLock Tells how the row is to be locked after the operation has been performed. Values include SQL_LOCK_NO_CHANGE, SQL_LOCK_EXCLUSIVE, and SQL_LOCK_UNLOCK.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_NEED_DATA The function needs more information to process the request.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLSetPos() function to set the cursor position in a result set. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
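A sketch of the basic pattern, assuming hstmt has a multirow rowset established with SQLExtendedFetch() (for example, after SQLSetScrollOptions() or the SQL_ROWSET_SIZE statement option has been set):

UDWORD  crowFetched;
UWORD   rgfRowStatus[10];
RETCODE RC;

RC = SQLExtendedFetch(hstmt, SQL_FETCH_NEXT, 1, &crowFetched, rgfRowStatus);

if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
{   // Position on the second row of the rowset and refresh its data
    SQLSetPos(hstmt, 2, SQL_REFRESH, SQL_LOCK_NO_CHANGE);
}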
SQLSetScrollOptions()
Prototype:
RETCODE SQLSetScrollOptions(HSTMT hstmt, UWORD fConcurrency, UWORD crowKeyset, UWORD crowRowset) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD fConcurrency Specifies the cursor's concurrency. Valid values are listed in Table 3.12.
UWORD crowKeyset Specifies the number of rows for which to buffer keys. Must be either greater than or equal to crowRowset or one of the following: SQL_SCROLL_FORWARD_ONLY, SQL_SCROLL_STATIC, SQL_SCROLL_KEYSET_DRIVEN, or SQL_SCROLL_DYNAMIC.
UWORD crowRowset Specifies the number of rows in a rowset. The crowRowset parameter defines the number of rows that will be fetched by each call to SQLExtendedFetch() and the number of rows that the application buffers.
Table 3.12. Concurrency values for SQLSetScrollOptions().
Identifier Description
SQL_CONCUR_READ_ONLY The cursor is read-only.
SQL_CONCUR_LOCK The cursor uses the lowest level of locking sufficient to ensure that the row can be updated.
SQL_CONCUR_ROWVER The cursor uses optimistic concurrency control, comparing row versions.
SQL_CONCUR_VALUES The cursor uses optimistic concurrency control, comparing values.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: The SQLSetScrollOptions() function should be used only with ODBC 1.x drivers. For ODBC 2.x drivers, use SQLSetStmtOption(). Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLSetStmtOption()
Prototype:
RETCODE SQLSetStmtOption(HSTMT hstmt, UWORD fOption, UDWORD vParam)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD fOption Specifies the option that is to be set. Valid options include SQL_ASYNC_ENABLE, SQL_BIND_TYPE, SQL_CONCURRENCY, SQL_CURSOR_TYPE, SQL_KEYSET_SIZE, SQL_MAX_LENGTH, SQL_MAX_ROWS, SQL_NOSCAN, SQL_QUERY_TIMEOUT, SQL_RETRIEVE_DATA, SQL_ROWSET_SIZE, SQL_SIMULATE_CURSOR, and SQL_USE_BOOKMARKS.
UDWORD vParam The value to which the option is to be set.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLSetStmtOption() function to set statement options. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
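For example, a sketch that caps a long-running query (hstmt is assumed to be an allocated statement handle; the limits shown are arbitrary):

// Give up if the datasource takes more than 30 seconds,
// and never return more than 500 rows.
SQLSetStmtOption(hstmt, SQL_QUERY_TIMEOUT, 30);
SQLSetStmtOption(hstmt, SQL_MAX_ROWS, 500);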
SQLSpecialColumns()
Prototype:
RETCODE SQLSpecialColumns(HSTMT hstmt, UWORD fColType, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner, SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName, UWORD fScope, UWORD fNullable) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UWORD fColType Specifies the type of column to return. Must be either SQL_BEST_ROWID or SQL_ROWVER.
UCHAR FAR * szTableQualifier A pointer to a buffer containing the qualifier name for the table.
SWORD cbTableQualifier The length of the string in szTableQualifier.
UCHAR FAR * szTableOwner A pointer to a buffer containing the owner name for the table.
SWORD cbTableOwner The length of the string in szTableOwner.
UCHAR FAR * szTableName A pointer to a buffer containing the table name.
SWORD cbTableName The length of the string in szTableName.
UWORD fScope The minimum required scope of the row ID.
UWORD fNullable Specifies whether to return special columns that can have a NULL value.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLSpecialColumns() function to retrieve information about the optimal set of columns that uniquely identifies a row in a table, or about the columns that are automatically updated when any value in the row is updated by a transaction. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLStatistics()
Prototype:
RETCODE SQLStatistics(HSTMT hstmt, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner, SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName, UWORD fUnique, UWORD fAccuracy)
Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szTableQualifier A pointer to a buffer containing the qualifier name.
SWORD cbTableQualifier The length of the string in szTableQualifier.
UCHAR FAR * szTableOwner A pointer to a buffer containing the owner name.
SWORD cbTableOwner The length of the string in szTableOwner.
UCHAR FAR * szTableName A pointer to a buffer containing the table name.
SWORD cbTableName The length of the string in szTableName.
UWORD fUnique Indicates the index type; either SQL_INDEX_UNIQUE or SQL_INDEX_ALL.
UWORD fAccuracy Specifies the importance of the CARDINALITY and PAGES columns in the result set. Use SQL_ENSURE to request that the driver retrieve the statistics unconditionally. Use SQL_QUICK to request that the driver retrieve results only if they are readily available from the server.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLStatistics() function to retrieve a list of statistics about a single specified table. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
SQLTablePrivileges()
Prototype:
RETCODE SQLTablePrivileges(HSTMT hstmt, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner, SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szTableQualifier A pointer to a buffer containing the table qualifier.
SWORD cbTableQualifier The length of the string in szTableQualifier.
UCHAR FAR * szTableOwner A pointer to a buffer containing the string search pattern for owner names.
SWORD cbTableOwner The length of the string in szTableOwner.
UCHAR FAR * szTableName A pointer to a buffer containing the string search pattern for table names.
SWORD cbTableName The length of the string in szTableName.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLTablePrivileges() function to return a list of tables and the privileges associated with each table in the list. Always check the return code from this function for errors.
Notes: The SQLTablePrivileges() function returns its results in the form of a result set. If this function fails, use the SQLError() function to find out why.
SQLTables()
Prototype:
RETCODE SQLTables(HSTMT hstmt, UCHAR FAR * szTableQualifier, SWORD cbTableQualifier, UCHAR FAR * szTableOwner, SWORD cbTableOwner, UCHAR FAR * szTableName, SWORD cbTableName, UCHAR FAR * szTableType, SWORD cbTableType) Parameters:
HSTMT hstmt A statement handle returned by the call to SQLAllocStmt().
UCHAR FAR * szTableQualifier A pointer to a buffer containing the qualifier name.
SWORD cbTableQualifier The length of the string in szTableQualifier.
UCHAR FAR * szTableOwner A pointer to a buffer containing the string search pattern for owner names.
SWORD cbTableOwner The length of the string in szTableOwner.
UCHAR FAR * szTableName A pointer to a buffer containing the string search pattern for table names.
SWORD cbTableName The length of the string in szTableName.
UCHAR FAR * szTableType A pointer to a buffer containing a list of table types to match.
SWORD cbTableType The length of the string in szTableType.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_STILL_EXECUTING An asynchronous operation is still pending.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLTables() function to obtain a list of the tables in the datasource. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
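A minimal sketch that lists the ordinary tables in the datasource (hstmt is assumed to be an allocated statement handle); Listing 3.6 later in this chapter shows a fuller use of the same call:

UCHAR   szTable[129];
SDWORD  cbTable;
char    szMsg[160];

if (SQLTables(hstmt, NULL, 0, NULL, 0, NULL, 0,
        (UCHAR FAR *)"'TABLE'", SQL_NTS) == SQL_SUCCESS)
{   // Column 3 of the SQLTables() result set is the table name
    SQLBindCol(hstmt, 3, SQL_C_CHAR, szTable, sizeof(szTable), &cbTable);
    while (SQLFetch(hstmt) == SQL_SUCCESS)
    {
        sprintf(szMsg, "Table: %s\n", szTable);
        OutputDebugString(szMsg);
    }
}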
SQLTransact()
Prototype:
RETCODE SQLTransact(HENV henv, HDBC hdbc, UWORD fType)
Parameters:
HENV henv A handle to an HENV as returned by the call to the SQLAllocEnv() function.
HDBC hdbc A handle to an HDBC as returned by the call to the SQLAllocConnect() function.
UWORD fType Either SQL_COMMIT or SQL_ROLLBACK.
Return Value: This function will return one of the following values:
SQL_SUCCESS The function completed successfully.
SQL_SUCCESS_WITH_INFO The function was successful, and more information is available.
SQL_ERROR The function failed. Call SQLError() to get more information about the specific failure.
SQL_INVALID_HANDLE The function failed. The handle that was passed wasn't a valid handle. Possibly, the function that created the handle had failed and didn't return a valid handle.
If this function fails, your SQL function should end or the error should be corrected; the function should then be reexecuted.
Usage: Call the SQLTransact() function to commit or roll back a transaction. Always check the return code from this function for errors.
Notes: If this function fails, use the SQLError() function to find out why.
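A sketch of a simple manual-commit transaction; henv, hdbc, and hstmt are assumed to be valid handles, autocommit is assumed to be off (see SQLSetConnectOption()), and the Accounts table and its columns are hypothetical names:

RETCODE RC;

RC = SQLExecDirect(hstmt,
    (UCHAR FAR *)"UPDATE Accounts SET Balance = Balance - 100 "
                 "WHERE AcctID = 1", SQL_NTS);

if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
{
    SQLTransact(henv, hdbc, SQL_COMMIT);    // Make the change permanent
}
else
{
    SQLTransact(henv, hdbc, SQL_ROLLBACK);  // Undo work since the last commit
}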
Using a Datasource
The first example is a simple function that accesses a table, determines what columns are in that table, and then fetches data from the datasource. Although this routine is a simple implementation, it can form the basis of a number of useful applications.

First, look at the sequence of operations in accessing an SQL datasource. Unlike most other statements, the SQL...() statements require a rather fixed sequence of execution. You simply can't call the SQLExecute() function without first setting up the SQL environment. The most basic part of any data access application is the initialization of the SQL environment.

Listing 3.1 shows a simple function that initializes the ODBC SQL environment. This function gets the names of the columns in the datasource that the user selects and then fetches the first four columns in this datasource. The routine would be better if it checked to make sure that there are at least four columns in the datasource. I also have coded checks for errors, but I didn't call error handlers in this example. A later example shows a simple error handler routine that can be used with most ODBC functions to display information to the developer and the user.

Listing 3.1. INITODBC.C: A simple function that initializes the ODBC SQL environment.
//  INITODBC.C
#include "windows.h" #include "odbcmisc.h" #include "sql.h" #include "sqlext.h" #include <stdio.h> #include <stdlib.h> #include <string.h> #include <ctype.h> /****************************************************************************** ** ** ** ** ** ** INPUTS: VARIOUS FUNCTION: Open DataBase Connectivity interface code TITLE: ODBC1.c
vcg03.htm
** ** ** ** ** ** ** ** ** ** ** ******************************************************************************/ // Static, for this module only: // Routines: void SimpleODBC(HWND hWnd) { #define STR_LEN 128+1 #define REM_LEN 254+1 /* Declare storage locations for result set data */ UCHAR UCHAR UCHAR szQualifier[STR_LEN], szOwner[STR_LEN]; szTableName[STR_LEN], szColName[STR_LEN]; szTypeName[STR_LEN], szRemarks[REM_LEN]; COPYRIGHT 1995 BY PETER D. HIPSON, All rights reserved. AUTHOR: Peter D. Hipson CALLS: ODBC routines: SQL...() RETURNS: YES OUTPUTS: VARIOUS
/* Declare storage locations for bytes available to return */
    SDWORD  cbQualifier, cbOwner, cbTableName, cbColName;
    SDWORD  cbTypeName, cbRemarks, cbDataType, cbPrecision;
    SDWORD  cbLength, cbScale, cbRadix, cbNullable;

/* Working variables used by the ODBC calls below */
    RETCODE RC;
    SWORD   swReturn;
    SWORD   nConStrOut;
    SDWORD  sdReturn;
    SWORD   DataType, Scale, Radix, Nullable;
    SDWORD  Precision, Length;

    char    szSource[60];
    char    szDirectory[132];
    char    szTable[60];
//  Keep above, delete below...
    char    szDSN[256];
    char    szConStrOut[256];
    char    szBuffer[513];
    char    szColumn1[128];
    char    szColumn2[128];
    char    szColumn3[128];
    char    szColumn4[128];
    int     i;
    int     j;
    HENV    henv;
    HDBC    hdbc;
    HSTMT   hstmt = SQL_NULL_HSTMT;
//  Get the datasource, table, and directory that the user selects.
    GetODBC(szSource, sizeof(szSource),
        szTable, sizeof(szTable),
        szDirectory, sizeof(szDirectory));

    SQLAllocEnv(&henv);
    SQLAllocConnect(henv, &hdbc);

    RC = SQLDriverConnect(hdbc, hWnd,
        (unsigned char far *)szDSN, SQL_NTS,
        (unsigned char far *)szConStrOut, sizeof(szConStrOut),
        (short far *)&nConStrOut, SQL_DRIVER_COMPLETE);

    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {// Call whatever error handler your application uses
    }
    else
    {// Connect was successful. Just continue in most cases
    }

    RC = SQLAllocStmt(hdbc, &hstmt);

    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {// Could not allocate the statement! Call an error handler:
    }
//  Get the DBMS version string. Just for our information; it is not used.
    SQLGetInfo(hdbc, SQL_DBMS_VER, szConStrOut,
        sizeof(szConStrOut), &swReturn);

//  Get the columns in the specified table:
    RC = SQLColumns(hstmt,
        NULL, 0,                            // All qualifiers
        NULL, 0,                            // All owners
        (unsigned char far *)szTable, SQL_NTS,
        NULL, 0);                           // All columns
    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {// Could not determine columns! Call an error handler:
    }

//  Now bind variables to columns!
    SQLBindCol(hstmt,  1, SQL_C_CHAR,   szQualifier, STR_LEN, &cbQualifier);
    SQLBindCol(hstmt,  2, SQL_C_CHAR,   szOwner,     STR_LEN, &cbOwner);
    SQLBindCol(hstmt,  3, SQL_C_CHAR,   szTableName, STR_LEN, &cbTableName);
    SQLBindCol(hstmt,  4, SQL_C_CHAR,   szColName,   STR_LEN, &cbColName);
    SQLBindCol(hstmt,  5, SQL_C_SSHORT, &DataType,   0,       &cbDataType);
    SQLBindCol(hstmt,  6, SQL_C_CHAR,   szTypeName,  STR_LEN, &cbTypeName);
    SQLBindCol(hstmt,  7, SQL_C_SLONG,  &Precision,  0,       &cbPrecision);
    SQLBindCol(hstmt,  8, SQL_C_SLONG,  &Length,     0,       &cbLength);
    SQLBindCol(hstmt,  9, SQL_C_SSHORT, &Scale,      0,       &cbScale);
    SQLBindCol(hstmt, 10, SQL_C_SSHORT, &Radix,      0,       &cbRadix);
    SQLBindCol(hstmt, 11, SQL_C_SSHORT, &Nullable,   0,       &cbNullable);
    SQLBindCol(hstmt, 12, SQL_C_CHAR,   szRemarks,   REM_LEN, &cbRemarks);

//  Then get the column names:
    while(TRUE)
    {// Do till we break out:
        RC = SQLFetch(hstmt);

        if (RC == SQL_NO_DATA_FOUND)
        {// Fetch done; got last column...
            break;
        }
        if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
        {// Fetch failed; may (or may not) be fatal!
            break;
        }

        if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
        {// Fetch was OK; display the results:
            sprintf(szBuffer,
                "%20.20s %10.10s %15.15s %15.15s %10.10s %10.10s \n",
                szQualifier, szOwner, szColName,
                szTableName, szTypeName, szRemarks);
            OutputDebugString(szBuffer);
        }
    }

    SQLFreeStmt(hstmt, SQL_CLOSE);
    SQLFreeStmt(hstmt, SQL_UNBIND);

//  END: Get the columns in the specified table:

//  Get data from the table:
    strcpy(szConStrOut, "SELECT * FROM \"");
    strcat(szConStrOut, szTable);
    strcat(szConStrOut, "\" ");

    RC = SQLExecDirect(hstmt, (unsigned char far *)szConStrOut, SQL_NTS);
    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {// Something is wrong; error message, and then DIE!
    }
    else
    {// Bind to whichever columns in result set are needed:
        SQLBindCol(hstmt, 1, SQL_C_CHAR,
            (unsigned char far *)szColumn1, sizeof(szColumn1), &sdReturn);
        SQLBindCol(hstmt, 2, SQL_C_CHAR,
            (unsigned char far *)szColumn2, sizeof(szColumn2), &sdReturn);
        SQLBindCol(hstmt, 3, SQL_C_CHAR,
            (unsigned char far *)szColumn3, sizeof(szColumn3), &sdReturn);
        SQLBindCol(hstmt, 4, SQL_C_CHAR,
            (unsigned char far *)szColumn4, sizeof(szColumn4), &sdReturn);

//      In our example, we will simply get up to 100 rows from the dataset:
        i = 0;
        j = 0;

        while(++j < 100)
        {// j is the number of rows
            RC = SQLFetch(hstmt);

            if (RC == SQL_ERROR || RC == SQL_SUCCESS_WITH_INFO)
            {// There was a problem!
            }

            if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
            {// Now we have our row's data! Use it (like write a report?)
                sprintf(szBuffer,
                    "1 '%15.15s' 2 '%15.15s' "
                    "3 '%15.15s' 4 '%15.15s' 5 '%d' \n",
                    szQualifier, szOwner, szColName, szTypeName, i);
                OutputDebugString(szBuffer);
            }
            else
            {// That's all, folks... No more data here!
                break;
            }
        }
    }

    SQLFreeStmt(hstmt, SQL_DROP);
    SQLDisconnect(hdbc);
    SQLFreeConnect(hdbc);
    SQLFreeEnv(henv);
}

Take a look at the SimpleODBC() function. First, the following code fragment shows what is necessary to initialize the ODBC system.
SQLAllocEnv(&henv); SQLAllocConnect(henv, &hdbc); RC = SQLDriverConnect(hdbc, hWnd, (unsigned char far *)szDSN, SQL_NTS, (unsigned char far *)szConStrOut, sizeof(szConStrOut), (short far *)&nConStrOut, SQL_DRIVER_COMPLETE); if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO) {// Call whatever error handler your application uses } else {// Connect was successful. Just continue in most cases
vcg03.htm
} RC = SQLAllocStmt(hdbc, &hstmt); Notice how a call is made to SQLAllocEnv() and then a call is made to SQLAllocConnect(). These two calls (which must be made in the order shown) initialize the ODBC environment and allocate the memory necessary for the connection handle. After this setup is performed, it's then possible to connect the actual database to the application. This is done with a call to the SQLDriverConnect() function. This function takes, as arguments, information to let ODBC locate the datasource and the HDBC handle to connect to. After the database has been connected, you need to open a statement handle. This statement handle is used to let the application issue SQL commands to the datasource. At this point, the datasource is truly connected to the application, and the application is able to obtain both information about the datasource and information from the datasource. In the sample program, the next step performed is to obtain information about the table that was opened. SQLColumns() is called to obtain information about each column in the datasource. SQLColumns() returned results are part of a result set. A result set is simply a set of "records" that the application is able to retrieve, either one at a time or in blocks.
    RC = SQLColumns(hstmt,
        NULL, 0,                            // All qualifiers
        NULL, 0,                            // All owners
        (unsigned char far *)szTable, SQL_NTS,
        NULL, 0);                           // All columns

    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {// Could not determine columns! Call an error handler:
    }

//  Now bind variables to columns!
    SQLBindCol(hstmt,  1, SQL_C_CHAR,   szQualifier, STR_LEN, &cbQualifier);
    SQLBindCol(hstmt,  2, SQL_C_CHAR,   szOwner,     STR_LEN, &cbOwner);
    SQLBindCol(hstmt,  3, SQL_C_CHAR,   szTableName, STR_LEN, &cbTableName);
    SQLBindCol(hstmt,  4, SQL_C_CHAR,   szColName,   STR_LEN, &cbColName);
    SQLBindCol(hstmt,  5, SQL_C_SSHORT, &DataType,   0,       &cbDataType);
    SQLBindCol(hstmt,  6, SQL_C_CHAR,   szTypeName,  STR_LEN, &cbTypeName);
    SQLBindCol(hstmt,  7, SQL_C_SLONG,  &Precision,  0,       &cbPrecision);
    SQLBindCol(hstmt,  8, SQL_C_SLONG,  &Length,     0,       &cbLength);
    SQLBindCol(hstmt,  9, SQL_C_SSHORT, &Scale,      0,       &cbScale);
    SQLBindCol(hstmt, 10, SQL_C_SSHORT, &Radix,      0,       &cbRadix);
    SQLBindCol(hstmt, 11, SQL_C_SSHORT, &Nullable,   0,       &cbNullable);
    SQLBindCol(hstmt, 12, SQL_C_CHAR,   szRemarks,   REM_LEN, &cbRemarks);
In the preceding code fragment, first the SQLColumns() function is called. Then you must bind variables in the application to the columns in the result set that SQLColumns() returns. In this example, you will look at all the columns; however, many applications may need to use only a few of the result set columns (such as getting the name of the column and the column's data type).
//  Then get the column names:
    while(TRUE)
    {// Do till we break out:
        RC = SQLFetch(hstmt);

        if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
        {// Fetch was OK; display the results:
            sprintf(szBuffer,
                "%20.20s %10.10s %15.15s %15.15s %10.10s %10.10s \n",
                szQualifier, szOwner, szColName,
                szTableName, szTypeName, szRemarks);
            OutputDebugString(szBuffer);
        }
    }

    SQLFreeStmt(hstmt, SQL_CLOSE);
    SQLFreeStmt(hstmt, SQL_UNBIND);

This code fragment shows how to get the actual records from the result set. The process, done in a while() loop, is simple and easy to program. A call to SQLFetch() at the beginning of the loop is followed by whatever code is necessary to process the information that SQLFetch() returns. The column names could be added to a list box to let the user select a specific column.

After the records from the result set have been processed, you need to discard the result set. You do this with calls to SQLFreeStmt(), as the following code fragment shows. Notice that you need to call SQLFreeStmt() two times with different arguments:
SQLFreeStmt(hstmt, SQL_CLOSE);
SQLFreeStmt(hstmt, SQL_UNBIND);

The first call to SQLFreeStmt() (with SQL_CLOSE) closes the cursor and discards the result set's remaining contents; the second call (with SQL_UNBIND) releases the column bindings made with SQLBindCol(). To obtain information from a table in a datasource, you must actually issue an SQL command to fetch the desired records. This chapter won't try to detail SQL commands; SQL commands are covered in Chapter 5, "Learning Structured Query Language."
//  Get data from the table:
    strcpy(szConStrOut, "SELECT * FROM \"");
    strcat(szConStrOut, szTable);
    strcat(szConStrOut, "\" ");

    RC = SQLExecDirect(hstmt, (unsigned char far *)szConStrOut, SQL_NTS);

This example simply creates the SQL statement SELECT * FROM and appends the table name. In this example the table name is always quoted; the rules on quoting a name, however, depend on whether the name contains embedded spaces. Later in this chapter, you will see an example of a function that quotes names only when quotes are needed.

Like SQLColumns(), SQLExecDirect() returns the results of the SQL statement as a result set. This result set contains the records from the datasource that the SQL statement has selected (usually limited with a WHERE clause). Because the example simply gets all the columns in the datasource, you must either know in advance (perhaps because you created the database) or determine (with a call to SQLColumns()) which columns are included in the result set.
Whenever a result set has been returned by an SQL...() function, you must bind variables in your application to the columns in the result set. The SQLFetch() function will place in these variables the data from the current row.
SQLBindCol(hstmt, 1, SQL_C_CHAR,
    (unsigned char far *)szColumn1, sizeof(szColumn1), &sdReturn);
SQLBindCol(hstmt, 2, SQL_C_CHAR,
    (unsigned char far *)szColumn2, sizeof(szColumn2), &sdReturn);
SQLBindCol(hstmt, 3, SQL_C_CHAR,
    (unsigned char far *)szColumn3, sizeof(szColumn3), &sdReturn);
SQLBindCol(hstmt, 4, SQL_C_CHAR,
    (unsigned char far *)szColumn4, sizeof(szColumn4), &sdReturn);

In this example, four columns in the result set are bound. If there are columns in the result set that your application doesn't need, you don't have to bind variables to them. You can always bind a character variable to a column, and the SQL...() routines will convert the column's data to a character-based format. However, the default conversion for numeric data might not be formatted the way you want.

After variables have been bound to columns, the next step is to actually fetch the rows from the result set. These rows are fetched with either SQLFetch() or SQLExtendedFetch(). In the following example, SQLFetch() is called to get the data from the result set.
while(TRUE)
{
    RC = SQLFetch(hstmt);

    if (RC == SQL_ERROR || RC == SQL_SUCCESS_WITH_INFO)
    {// There was a problem!
    }

    if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
    {// Now we have our row's data! Use it (like write a report?)
    }
    else
    {// That's all, folks... No more data here!
        break;
    }
}

The preceding code shows a simple while() loop; the first line in the loop is a call to SQLFetch(). After SQLFetch() completes successfully, you can use the data (which in this example is stored in the variables szColumn1, szColumn2, szColumn3, and szColumn4). After the results of the result set have been processed by the application and the application is finished with the entire datasource, the statement handle should be discarded, as shown next. It's also necessary to disconnect from the datasource, free the connection handle, and then free the environment handle. These calls are made in the opposite order from the order in which the handles were created at the beginning of the sample function.

SQLFreeStmt(hstmt, SQL_DROP);
SQLDisconnect(hdbc);
SQLFreeConnect(hdbc);
SQLFreeEnv(henv);
When calls to SQL...() functions are made, each function returns a return code that should always be examined by the application to determine whether the function was successful. When a function fails, the application must determine what failed and, if possible, correct the failure with a minimum of interruption to the user's workflow. Except for the most disastrous failures, the application should try to recover without user interaction if possible. When the problem is so serious that recovery is impossible, the application must notify the user and explain the problem.

Don't put a message like "Error 0x1003 occurred, program ending" in your application. This type of message went out with the 8088. Make sure that the error message provides as much information as possible to assist the user in correcting the problem. For example, "Could not open database C:\MSOffice\Access\Samples\NorthWind.MDB, please check and make sure that the database is in this directory" is a better message to give to the user.

Regardless of how your application "talks" to the user, the SQLError() function lets your application obtain information about the failure. The function shown in Listing 3.2 is an error function that uses the SQLError() function's return values to format an error message.

Listing 3.2. SQLERROR.C: A simple error-reporting function that uses SQLError().
//  SQLError.C

#include "windows.h"
#include "resource.h"
#include "odbcmisc.h"
#include "sql.h"
#include "sqlext.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

/******************************************************************************
**  TITLE:      SQLError.c
**
**  FUNCTION:   Open DataBase Connectivity interface code
**
**  INPUTS:     VARIOUS
**
**  OUTPUTS:    VARIOUS
**
**  RETURNS:    YES
**
**  CALLS:      ODBC routines: SQL...()
**
**  AUTHOR:     Peter D. Hipson
**
**  COPYRIGHT 1995 BY PETER D. HIPSON, All rights reserved.
**
******************************************************************************/

// Static, for this module only:

// Routines:

// A typical SQL type query:

void SQLPrintError(HENV henv, HDBC hdbc, HSTMT hstmt)
{
    RETCODE RC;
    char    szSqlState[256];
    char    szErrorMsg[256];
    char    szMessage[256];
    SDWORD  pfNativeError;
    SWORD   pcbErrorMsg;
    RC = SQLError(henv, hdbc, hstmt,
        szSqlState, &pfNativeError,
        szErrorMsg, sizeof(szErrorMsg), &pcbErrorMsg);

    if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
    {
        sprintf(szMessage, "SQL State = '%s', \nMessage = '%s'",
            szSqlState, szErrorMsg);
        MessageBox(NULL, szMessage, "ODBC Error...", MB_OK);
    }
    else
    {
        MessageBox(NULL, "SQLError() returned an error!!!",
            "ODBC Error...", MB_OK);
    }
}

The SQLPrintError() function shown in Listing 3.2 really isn't that user-friendly. It displays a cryptic message about SQL states and messages without telling the user exactly what went wrong and what to do about the failure. A better SQLPrintError() function would actually parse the error condition returned by SQLError() and then offer both the reason and possible corrective actions. A hot link to WinHelp, with a help file containing more detailed information about both the failure and possible corrective actions, wouldn't be out of order. Of course, adding a help button would require writing a custom replacement for MessageBox(); however, that wouldn't be too difficult if you used the Visual C++ resource editor.
Quoting Names
When an SQL statement is built in a string, it's necessary to quote some names. Any name that contains embedded blanks must be quoted (column names often have embedded blanks, as in 'Zip Code'). The QuoteName() function shown in Listing 3.3 quotes a string if the string contains a character that isn't a letter, a number, or an underscore. This function takes two parameters: a pointer to a buffer containing the string and the size of the buffer. The size parameter isn't the length of the string but the size of the memory allocated to the buffer; it is needed so that the function can determine whether there is enough room to add the two quote characters to the string.

Listing 3.3. QUOTES.C: Puts quotes around a string.
//  QUOTES.C
#include "windows.h" #include "resource.h" #include "odbcmisc.h" #include "sql.h" #include "sqlext.h"
file:///H|/0-672-30913-0/vcg03.htm (92 of 116) [14/02/2003 03:06:01 ]
vcg03.htm
#include <stdio.h> #include <stdlib.h> #include <string.h> #include <ctype.h> /****************************************************************************** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ** ******************************************************************************/ // Static, for this module only: // Routines: BOOL QuoteName( char * szName,
file:///H|/0-672-30913-0/vcg03.htm (93 of 116) [14/02/2003 03:06:01 ]
TITLE: Quotes.c
INPUTS: VARIOUS
OUTPUTS: VARIOUS
RETURNS: YES
vcg03.htm
int {
nMaxLength)
    // This function will enclose, in quotes, an SQL name if it contains
    // a character that is not alphabetic, numeric, or an underscore
    int     i;
    BOOL    bMustQuote = FALSE;

    for (i = 0; i < (int)strlen(szName); i++)
    {
        if(!__iscsym(szName[i]))
        {
            bMustQuote = TRUE;
        }
    }

    if (bMustQuote)
    {// Had a special character!
        if((int)strlen(szName) + 2 > nMaxLength)
        {// Error: No room for quotes!
            bMustQuote = FALSE;
        }
        else
        {// Quote this string...
            memmove(&szName[1], &szName[0], strlen(szName) + 1);
            szName[0] = '"';
            strcat(szName, "\"");
        }
    }
    return(bMustQuote);
}

Calling QuoteName() makes it easy to pass correct SQL commands, because if a name that must be quoted isn't quoted, the SQL command will fail.

It's also necessary to get the datasource name from the user. You can do this with a simple set of C++ functions (which are callable from C code). The final part of this section shows the GetODBC() function that is called to get the datasource and table names. This code is written in two parts. The first part is the main calling routine, GetODBC(), which is in the file GETODBC.CPP. This file is shown in Listing 3.4.

Listing 3.4. GETODBC.CPP: The GetODBC() function.
//  GETODBC.CPP

#include "stdafx.h"
#include <afxdb.h>

// Include *YOUR* application header here (instead of application.h):
#include "application.h"
#include "odbcmisc.h"
#include "sql.h"
#include "sqlext.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <commdlg.h>
#include "odbctabl.h"
#include "odbcinfo.h"

/******************************************************************************
**  INPUTS:     VARIOUS
**
**  OUTPUTS:    VARIOUS
**
**  RETURNS:    YES
**
******************************************************************************/

// Static, for this module only:

// Functions (may be shared):

// Shared data objects, not otherwise allocated:

// Routines:

BOOL GetODBC(
if (szDataSource) szDataSource[0] = '\0'; if (szDataTable) szDataTable[0] = '\0'; if (szDataDir) szDataDir[0] = '\0'; bReturnCode = COdbcInfo.GetInfo(); // A little debugging output for the programmer! TRACE("At the GetODBC() end: return %d Datasource " "'%s' table '%s' datadir '%s'\n", bReturnCode, (const char *)COdbcInfo.m_DataSourceName, (const char *)COdbcInfo.m_DataTableName, (const char *)COdbcInfo.m_DataTableDir); if (bReturnCode && szDataSource != NULL) {// User wants the datasource name if (COdbcInfo.m_DataSourceName.GetLength() < nDataSourceSize)
{ strcpy(szDataSource, COdbcInfo.m_DataSourceName); } else { szDataSource[0] = '\0'; bReturnCode = FALSE; } } if (bReturnCode && szDataTable != NULL) {// User wants the datatable name if (COdbcInfo.m_DataTableName.GetLength() < nDataTableSize) { strcpy(szDataTable, COdbcInfo.m_DataTableName); } else { szDataTable[0] = '\0'; bReturnCode = FALSE; } } if (bReturnCode && szDataDir != NULL) {// User wants the datatable directory name if (COdbcInfo.m_DataTableDir.GetLength() < nDataTableSize) { strcpy(szDataDir, COdbcInfo.m_DataTableDir); }
else { szDataDir[0] = '\0'; bReturnCode = FALSE; } } // Finally, return either success or failure. return(bReturnCode); } The GetODBC() function creates an object of class ODBCInfo. This class is in the file ODBCINFO.CPP, which is shown in Listing 3.5. Notice that even though a C++ class is used, the CDatabase class is used in this function. This is because CDatabase offers integrated functionality to the program. This functionality could also have been incorporated by using the SQL...() functions if desired. Listing 3.5. ODBCINFO.CPP: Get ODBC datasource information.
//  ODBCINFO.CPP
#include "stdafx.h" // After stdafx.h, include the application's main .H file. // The application's resources must have the IDD_SELECT_ODBC_TABLE dialog // defined! (save ODBC.RC, and copy using the resource editor and clipboard) #include "APPLICATION.h" #include <afxdb.h> #include "sql.h" #include "sqlext.h" #include "odbcinfo.h" #include <stdio.h> #include <stdlib.h> #include <string.h>
/******************************************************************************
**  TITLE:      ODBCINFO.CPP
**
**  INPUTS:     VARIOUS
**
**  OUTPUTS:    VARIOUS
**
**  RETURNS:    YES
**
******************************************************************************/
// Static, for this module only: // Shared data objects, not otherwise allocated: // Routines: // Construction ODBCInfo::ODBCInfo() {// Initialize variables, etc. // // // This sets the default table types that will be displayed. To not display a table type at startup time, change the appropriate variable to FALSE. m_Synonyms = TRUE; m_SystemTables = TRUE; m_Tables = TRUE; m_Views = TRUE; } BOOL ODBCInfo::GetInfo() { HSTMT char SWORD CString RETCODE int TRY { if (!m_CDodbc.Open(m_TableInfo)) {// User selected Cancel. Go home. No more playing for 'im. hstmt = SQL_NULL_HSTMT; szConStrOut[256]; swReturn; CDBType; RC; nReturnCode = TRUE;
return(FALSE); } } CATCH(CDBException, e) {// User probably hit Return w/o selecting datasource! Msg and return! return FALSE; } END_CATCH m_DatabaseName = m_CDodbc.GetDatabaseName(); m_Connect = m_CDodbc.GetConnect(); m_CanUpdate = m_CDodbc.CanUpdate(); m_CanTransact = m_CDodbc.CanTransact(); // // // C++'s MFC CRecordSet() class is a bit too unflexible to really work well with an undefined database. Therefore, we simply break into the older API (SQL...()) calls RC = SQLGetInfo(m_CDodbc.m_hdbc, SQL_DATA_SOURCE_NAME, szConStrOut, sizeof(szConStrOut), &swReturn); m_DataSourceName = szConStrOut; // Lines below are simply for debugging (nice to see what happened): // // // // // // // // SQLGetInfo(m_CDodbc.m_hdbc, SQL_DRIVER_VER, szConStrOut, SQLGetInfo(m_CDodbc.m_hdbc, SQL_DRIVER_NAME, szConStrOut, sizeof(szConStrOut), &swReturn); TRACE("Driver Name %s\n", szConStrOut); TRACE("Datasoure: '%s'\n", m_DataSourceName);
// // // // // // // // // // // // // // // // // // // // //
SQLGetInfo(m_CDodbc.m_hdbc, SQL_DBMS_VER, szConStrOut, sizeof(szConStrOut), &swReturn); TRACE("DBMS Version %s\n", szConStrOut); Once a datasource is provided, we need to get the TABLE that the user will want. If the datasource is a text file, we use a CFileDialog object (with modifications...) SQLGetInfo(m_CDodbc.m_hdbc, SQL_DBMS_NAME, szConStrOut, sizeof(szConStrOut), &swReturn); CDBType = szConStrOut; if (CDBType == "TEXT") {// Data type is text. Use common dialog support to open file. // This code will break under Windows 95's new Explorer system,
// // //
which does not support common dialog template modifications in the same manner as Windows 3.x and Windows NT! It forces usage of "old style" dialog box! CString Filter; Filter = "CSV Files (*.csv)|*.csv|" "TAB Files (*.tab)|*.tab|" "Text Files (*.txt)|*.txt|" "Data Files (*.dat)|*.dat||"; CFileDialog dlg(TRUE, "txt", NULL, OFN_FILEMUSTEXIST | OFN_HIDEREADONLY, (const char *)Filter);
//
Patch to use our dialog box template: dlg.m_ofn.hInstance = AfxGetInstanceHandle(); dlg.m_ofn.lpTemplateName = MAKEINTRESOURCE(TABLESELECT); dlg.m_ofn.Flags |= OFN_ENABLETEMPLATE; if (dlg.DoModal() == IDOK) { m_DataTableName = dlg.GetPathName(); int nPosition = m_DataTableName.ReverseFind('\\'); if (nPosition > 0) { m_DataTableDir = m_DataTableName.Left(nPosition); m_DataTableName = m_DataTableName.Mid(nPosition + 1); } } else
{ nReturnCode = FALSE; } } else {// Data type is not text; possibly Access, dBASE, or FoxPro // (but could be others) OdbcTabl OTDlg; OTDlg.m_Synonyms = m_Synonyms; OTDlg.m_SystemTables = m_SystemTables; OTDlg.m_Tables = m_Tables; OTDlg.m_Views = m_Views; OTDlg.m_CDB = &m_CDodbc; if (OTDlg.DoModal() == IDOK) { m_DataTableName = OTDlg.m_TableName; } else { nReturnCode = FALSE; } } // Finally, a successful return return(nReturnCode); } void ODBCInfo::PrintError(HENV henv, HDBC hdbc, HSTMT hstmt)
{
    RETCODE RC;
    char    szSqlState[256];
    char    szErrorMsg[256];
    SDWORD  pfNativeError;
    SWORD   pcbErrorMsg;
RC = SQLError( henv, hdbc, hstmt, (UCHAR FAR*)szSqlState, &pfNativeError, (UCHAR FAR*)szErrorMsg, sizeof(szErrorMsg), &pcbErrorMsg); TRACE("SQL ERROR:\n"); if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO) { TRACE("%s\n", szSqlState); TRACE("%s\n", szErrorMsg); } else { TRACE("%s\n", "SQLError() returned an error!!!"); } }
The ODBCInfo class shows a number of features. First, there is a call to the CDatabase::Open() function to open a datasource. Then you must prompt the user to select a table in the datasource. The call to the CDatabase::Open() function is followed by a number of other CDatabase member function calls, as the following edited code fragment shows:
m_CDodbc.Open(m_TableInfo);

m_DatabaseName = m_CDodbc.GetDatabaseName();
m_Connect = m_CDodbc.GetConnect();
m_CanUpdate = m_CDodbc.CanUpdate();
m_CanTransact = m_CDodbc.CanTransact();

RC = SQLGetInfo(m_CDodbc.m_hdbc, SQL_DATA_SOURCE_NAME,
    szConStrOut, sizeof(szConStrOut), &swReturn);
m_DataSourceName = szConStrOut;

SQLGetInfo(m_CDodbc.m_hdbc, SQL_DBMS_NAME,
    szConStrOut, sizeof(szConStrOut), &swReturn);

This example shows the call to Open() and then calls to get the database name and the connect string, and to find out whether the database can be updated (not all datasources are updatable) and whether the database supports transactions. You also get the datasource name and the DBMS name. After you have the database, you must determine which table in the database the user is working with. Again, this type of information is individual to each application: perhaps a number of predefined tables will be used, or perhaps the user will have to be prompted to supply information about the table. In my example, I present either a custom dialog box of class OdbcTabl (for databases that aren't text-based) or the CFileDialog MFC class, with a custom template, to select which text file the user will use for the table. The sample function actually has two dialog box templates (see ODBC.RC, included on this book's CD, and Figure 3.1). The ODBCTABL.CPP file is shown in Listing 3.6. This class uses calls to the SQL...() functions to manage the gathering of information about tables in the specified datasource.

Figure 3.1. Dialog boxes for ODBCTABL.

Listing 3.6. ODBCTABL.CPP: Dialog box allowing users to select a table.
// #include "stdafx.h" #include <afxdb.h> // After stdafx.h, include the application's main .H file. // The application's resources must have the IDD_SELECT_ODBC_TABLE dialog // defined! (save ODBC.RC, and copy using the editor) #include "sql.h" #include "sqlext.h" #include "odbctabl.h" #ifdef _DEBUG #undef THIS_FILE static char BASED_CODE THIS_FILE[] = __FILE__; #endif ///////////////////////////////////////////////////////////////////////////// // OdbcTabl dialog OdbcTabl::OdbcTabl(CWnd* pParent /*=NULL*/) : CDialog(OdbcTabl::IDD, pParent) { //{{AFX_DATA_INIT(OdbcTabl) m_Synonyms = FALSE; m_SystemTables = FALSE; m_Tables = FALSE; m_Views = FALSE; //}}AFX_DATA_INIT } void OdbcTabl::DoDataExchange(CDataExchange* pDX) {
    CDialog::DoDataExchange(pDX);
    //{{AFX_DATA_MAP(OdbcTabl)
    DDX_Control(pDX, IDC_SYNONYMS, m_CSynonyms);
    DDX_Control(pDX, IDC_VIEWS, m_CViews);
    DDX_Control(pDX, IDC_TABLES, m_CTables);
    DDX_Control(pDX, IDC_SYSTEM_TABLES, m_CSystemTables);
    DDX_Control(pDX, IDC_LIST1, m_TableList);
    DDX_Check(pDX, IDC_SYNONYMS, m_Synonyms);
    DDX_Check(pDX, IDC_SYSTEM_TABLES, m_SystemTables);
    DDX_Check(pDX, IDC_TABLES, m_Tables);
    DDX_Check(pDX, IDC_VIEWS, m_Views);
    //}}AFX_DATA_MAP
}

BEGIN_MESSAGE_MAP(OdbcTabl, CDialog)
    //{{AFX_MSG_MAP(OdbcTabl)
    ON_BN_CLICKED(IDC_SYNONYMS, OnSynonyms)
    ON_BN_CLICKED(IDC_SYSTEM_TABLES, OnSystemTables)
    ON_BN_CLICKED(IDC_TABLES, OnTables)
    ON_BN_CLICKED(IDC_VIEWS, OnViews)
    //}}AFX_MSG_MAP
END_MESSAGE_MAP()

/////////////////////////////////////////////////////////////////////////////
// OdbcTabl message handlers

BOOL OdbcTabl::OnInitDialog()
{
    CDialog::OnInitDialog();
    m_TableList.SetTabStops(75);
    LoadTableList();
    return TRUE;    // Return TRUE unless you set the focus to a control
}

void OdbcTabl::LoadTableList()
{
    HSTMT   hstmt = SQL_NULL_HSTMT;
    long    lReturnLength;
    int     i;
    SWORD   swReturn;
    RETCODE RC;
    char    szQualifier[128];
    char    szOwner[128];
    char    szName[128];
    char    szType[128];
    char    szConStrOut[256];
    char    szRemarks[254];

    RC = SQLAllocStmt(m_CDB->m_hdbc, &hstmt);
    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {
        TRACE("SQLAllocStmt() FAILED!!!!\n");
        PrintError(SQL_NULL_HENV, m_CDB->m_hdbc, hstmt);
    }
    RC = SQLTables(hstmt, (unsigned char far *)"%", SQL_NTS,
                   (unsigned char far *)"", 0,
                   (unsigned char far *)"", 0,
                   (unsigned char far *)"", 0);
    SQLFreeStmt(hstmt, SQL_CLOSE);
    SQLFreeStmt(hstmt, SQL_UNBIND);
    SQLGetInfo(m_CDB->m_hdbc, SQL_MAX_OWNER_NAME_LEN, &i,
               sizeof(int), &swReturn);
    szRemarks[0] = '\0';
    if (m_CTables.GetCheck() == 1)
    {
        strcat(szRemarks, "'TABLE'");
    }
    if (m_CSystemTables.GetCheck() == 1)
    {
        if (strlen(szRemarks) > 0)
            strcat(szRemarks, ", ");
        strcat(szRemarks, "'SYSTEM TABLE'");
    }
    if (m_CViews.GetCheck() == 1)
    {
        if (strlen(szRemarks) > 0)
            strcat(szRemarks, ", ");
        strcat(szRemarks, "'VIEW'");
    }
    if (m_CSynonyms.GetCheck() == 1)
    {
        if (strlen(szRemarks) > 0)
            strcat(szRemarks, ", ");
        strcat(szRemarks, "'SYNONYM'");
    }
    RC = SQLTables(hstmt,
                   (unsigned char far *)szRemarks, strlen(szRemarks));
    if (RC != SQL_SUCCESS && RC != SQL_SUCCESS_WITH_INFO)
    {
        PrintError(SQL_NULL_HENV, m_CDB->m_hdbc, hstmt);
    }
    SQLGetInfo(m_CDB->m_hdbc, SQL_DBMS_VER, szConStrOut,
               sizeof(szConStrOut), &swReturn);
    TRACE("%s\n", szConStrOut);
    // Now bind variables to columns!
    RC = SQLBindCol(hstmt, 1, SQL_C_CHAR, szQualifier,
                    sizeof(szQualifier), &lReturnLength);
    RC = SQLBindCol(hstmt, 2, SQL_C_CHAR, szOwner,
                    sizeof(szOwner), &lReturnLength);
    RC = SQLBindCol(hstmt, 3, SQL_C_CHAR, szName,
                    sizeof(szName), &lReturnLength);
    RC = SQLBindCol(hstmt, 4, SQL_C_CHAR, szType,
                    sizeof(szType), &lReturnLength);
    RC = SQLBindCol(hstmt, 5, SQL_C_CHAR, szRemarks,
                    sizeof(szRemarks), &lReturnLength);
    // Then get the table names:
    m_TableList.ResetContent();
    while(TRUE)
    {
        RC = SQLFetch(hstmt);
        if (RC == SQL_ERROR || RC == SQL_SUCCESS_WITH_INFO)
        {
            TRACE("SQLFetch() FAILED!!!!\n");
            PrintError(SQL_NULL_HENV, m_CDB->m_hdbc, hstmt);
        }
        if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
        {
            // Must set tab stops for this list to look good!
            sprintf(szRemarks, "%s\t%s", szType, szName);
            m_TableList.AddString(szRemarks);
        }
        else
        {// That's all, folks...
            break;
        }
    }
    m_TableList.SetCurSel(0);
    SQLFreeStmt(hstmt, SQL_CLOSE);
    SQLFreeStmt(hstmt, SQL_UNBIND);
    SQLFreeStmt(hstmt, SQL_DROP);
}

void OdbcTabl::OnSynonyms()
{
    // TODO: Add your control notification handler code here
    LoadTableList();
}
void OdbcTabl::OnSystemTables()
{
    // TODO: Add your control notification handler code here
    LoadTableList();
}

void OdbcTabl::OnTables()
{
    // TODO: Add your control notification handler code here
    LoadTableList();
}

void OdbcTabl::OnViews()
{
    // TODO: Add your control notification handler code here
    LoadTableList();
}

void OdbcTabl::OnOK()
{
    CString TempString;
    // TODO: Add extra validation here
    m_TableList.GetText(m_TableList.GetCurSel(), TempString);
    // Get everything after the tab...
    m_TableName = TempString.Mid(TempString.Find('\t') + 1);
    CDialog::OnOK();
}

void OdbcTabl::PrintError(HENV henv, HDBC hdbc, HSTMT hstmt)
{// Private, programmer's error handler. Outputs to the debugging
 // terminal and doesn't use a message box!
    RETCODE RC;
    char    szSqlState[256];
    char    szErrorMsg[256];
    SDWORD  pfNativeError;
    SWORD   pcbErrorMsg;
    RC = SQLError(henv, hdbc, hstmt, (UCHAR FAR*)szSqlState,
                  &pfNativeError, (UCHAR FAR*)szErrorMsg,
                  sizeof(szErrorMsg), &pcbErrorMsg);
    TRACE("SQL ERROR:\n");
    if (RC == SQL_SUCCESS || RC == SQL_SUCCESS_WITH_INFO)
    {
        TRACE("%s\n", szSqlState);
        TRACE("%s\n", szErrorMsg);
    }
    else
    {
        TRACE("%s\n", "SQLError() returned an error!!!");
    }
}
Summary
This chapter introduced you to the ODBC SQL...() functions and provided a reference section. A set of sample routines
provided practical examples of how to use the SQL...() functions and also showed examples of how to use the MFC database objects that were covered in Chapter 2. This chapter completes Part I of this book. You will create more sophisticated examples of decision support and transaction processing applications when you reach Part III, "An Introduction to Database Front-End Design," and Part IV, "Advanced Programming with Visual C++." The next chapter, "Optimizing the Design of Relational Databases," introduces you to database design methodology and shows you some of the CASE design tools that are available for Access and client-server databases.
Classifying Database Systems
    Database Terminology
    Flat-File Databases
    The Network and Hierarchical Database Models
    The Relational Database Model
    Types of Relational Database Managers
        Relational SQL Database Management Systems
        Three-Tier Client-Server Architecture and LOBjects
        Traditional Desktop Relational Database Managers
        Microsoft Access: A Hybrid RDBMS
Modeling Data
    Database Diagrams
    Using Modeling Tools for Database Design
Rules for Relational Database Design
    Organizing Entity Classes
    Normalizing Table Data
        First Normal Form
        Second Normal Form
        Third Normal Form
        Over-Normalizing Data and Performance Considerations
        Fourth Normal Form
        Fifth Normal Form
Indexing Tables for Performance and Domain Integrity
    Table Indexing Methods
    Records and Data Pages
    Balanced-Tree Indexes
    Choosing Fields to Index
Summary
decks could be collated into the two-level deck to create more detailed reports. Early tabulating machines could create customer subtotals and order grand totals. Figure 4.1 shows the effect of collating two decks of cards to print a report of invoices.

Figure 4.1. Collating punched cards for customers and invoices in order to print a report.

The obvious problem with the punched-card collation technique was that every time you wanted a different report, you had to separate (decollate) the cards back into their original decks and then manually run a different set of collation processes. Replacing tabulating machines with computers equipped with nine-track magnetic tape drives solved many of the problems associated with moving decks of cards back and forth between collators. You transferred the data from a sorted deck of cards to a magnetic tape, mounted the tapes you needed on tape drives, and let the computer combine (merge) the data from the "table" tapes onto a new tape whose data was identical to that of a collated deck of punched cards. Now you could print a report from the data on the newly recorded tape.

Punched-card decks and magnetic tapes are sequential devices. Finding a particular record requires that you begin searching from the first card of the deck or the first record of a tape and read each card or record until you find a match (or determine that no matching record exists). When high-capacity random-access data storage devices (such as disk drives) became available, searching for a particular record became much faster, even if you had to read each record in the table. To speed the process, sorting and indexing methods were developed to minimize the number of records that the computer had to read until it found the matching data. The seminal work in this field was Volume 3 of Stanford University Professor Donald E. Knuth's Art of Computer Programming series, Sorting and Searching (Addison-Wesley, 1973). It's still in print today.

Advances in computer technology after the advent of the random-access disk drive have occurred primarily in the form of architectural, rather than conceptual, changes to both hardware and software. The pace of improvement in the operating speed and the rate of cost reduction of computer hardware has far out-distanced the rate of progress in software engineering, especially in database design and programming methodology. You can substantially improve the performance of an ill-conceived and poorly implemented database design simply by acquiring a faster computer. The price of a new computer is usually much less than the cost of re-engineering the organization's legacy database structure. Ultimately, however, poor database designs and implementations result in a severe case of organizational inefficiency. One of the purposes of this chapter is to provide a sufficient background in database design to make sure that the database structures you create don't fall into this category.

The following sections discuss elementary database terminology in the object-oriented language to which you were introduced in Chapter 2, "Understanding MFC's ODBC Database Classes." Then you will learn about the different types of computer database structures used today.
Database Terminology
Whatever data storage and retrieval mechanism is used, the fundamental element of a database is a table. A table is a database object that consists of a collection of rows (records) that have an identical collection of properties. The values associated with the properties of a table appear in columns (fields). Row-column (spreadsheet) terminology is most commonly used in databases that employ SQL statements to manipulate table data; desktop databases commonly use record-field terminology. This book uses the terms record and field when referring to persistent database objects (Tabledef objects) and row and column when referring to virtual tables (Recordset objects) created from Tabledef and Querydef objects. The distinction between the two forms of terminology, however, isn't very significant. Figure 4.2 illustrates the generalized structure of a database table. Figure 4.2. The generalized structure of a database table.
NOTE The object hierarchy of the Microsoft Jet 1.x database engine supplied with Visual Basic 3.0 and Access 1.x included a Table object, a member of the Tables collection. The Table object and Tables collections don't appear in the Microsoft DAO 3.0 Object library or the Microsoft DAO 2.5/3.0 Compatibility library. Visual C++ 4.0 supports operations on Table objects, such as the OpenTable method, for backward compatibility with Jet 1.x code.
The following list describes the most important properties of database table objects. These property descriptions and rules apply to tables of conventional databases that use only fundamental data types: character strings and numerical values.
A record is a representation of a real-world object, such as a person, a firm, an invoice, or one side of a transaction involving money. A record of a table is the equivalent of one punched card in a deck. In formal database terminology, a row or record is an entity. Synonyms for entity include data entity, data object, data instance, and instance. Tables are the collection (set) of all entities of a single entity class; statisticians call a table a homogeneous universe.

A field describes one of the characteristics of the objects represented by records. A field corresponds to a column of a spreadsheet.

The intersection between a row and a column is called an attribute and represents the value of a significant property of a real-world object. Attributes are also called cells and data cells, terms derived from spreadsheet applications. All the attributes contained in a single column of a table are called an attribute class.
The fundamental rule of all table objects is that each field is devoted to one and only one property. (This rule is implied by the terms attribute and attribute class.) Attribute values are said to be atomic, used here as a synonym for indivisible. Each field is assigned a field name that is unique within the table. A Name field that contains entries such as "Dr. John R. Jones, Jr." isn't atomic; the field actually consists of five attributes: title, first name, middle initial, last name, and suffix. You can sacrifice the atomicity of fields to a limited degree without incurring serious problems. It's common practice to combine first name and middle initial, and sometimes the suffix, into a single field.

It's desirable, but not essential, for each record in a table to have a set of attributes by which you can uniquely distinguish one record in the table from any other record. This set is called the entity identifier or identifier. In some cases, such as tables that contain the line items of invoices, records need not have a unique identifier. It's good database design practice, however, to provide such an identifier, even if you have to add an item number attribute class to establish the uniqueness.

The fields that include the identifier attributes are called the primary key or primary key fields of the table. By definition, the set of attribute values that make up the primary key must be unique for each record. Records in related tables are joined by values of the primary key in one table and by equal values in the primary key or the foreign keys of the other table. A foreign key is one or more attributes that don't constitute the primary key of the table. These attributes connect the record with another record in the same table or a different table. (A short example of declaring a primary key in SQL appears after this list.)

Tables that contain records identifying objects that are inherently unique, such as human beings, are called primary or base tables. Each of the records of a primary table must have one or more attributes that uniquely identify the entity. Theoretically, all U.S. citizens (except newborns) have a unique Social Security number that is used for identification purposes. Thus, an employee table in the U.S. should be able to use a single attribute, the Social Security number, to serve as an entity identifier. A duplicate Social Security number usually indicates either a data entry error or a counterfeit Social Security card.

Tables are logical constructs; that is, they don't need to be stored on a disk drive in tabular format. Traditional desktop database managers, such as dBASE and Paradox, have a file structure that duplicates the appearance of the table. However, most mainframe and client-server database management systems store many tables (and even more than one database) within a single database file. Microsoft Access 1.0 was the first widely accepted desktop RDBMS to store all the tables that constitute a single database in one Jet .MDB file. There is no easily discernible relationship between the physical and logical structures of tables of mainframe, client-server, and Jet database types.

dBASE II introduced the concept of a record number to the world of PC databases. The record number, returned by xBase's RECNO() function, is an artificial construct that refers to the relative physical position (called the offset) of a record in a table file. Record numbers change when you physically reorder (sort) the table file. Record number isn't an attribute of a table unless you create a field and add record number values (often called a record ID field). Record numbers that appear in Access 95's equivalent of the Data control (navigation buttons) are generated by Access, not by the Jet database engine.
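To make the primary-key concept concrete, the following fragment sketches how such a table might be created from a Visual C++ front end by sending an SQL CREATE TABLE statement through MFC's CDatabase class. This is a minimal illustration only: the Employees table, its fields, and the CHAR field sizes are hypothetical, and the exact DDL syntax accepted depends on the ODBC driver and datasource to which you connect.

// Create a primary table keyed on the employee's Social Security number.
// Requires <afxdb.h>; passing NULL to Open() displays the ODBC datasource
// selection dialog so the user can choose where the table is created.
CDatabase db;
if (db.Open(NULL))
{
    TRY
    {
        db.ExecuteSQL("CREATE TABLE Employees "
                      "(SSN CHAR(11) NOT NULL, "
                      "LastName CHAR(30), "
                      "FirstName CHAR(20), "
                      "CONSTRAINT pkEmployees PRIMARY KEY (SSN))");
    }
    CATCH (CDBException, e)
    {
        TRACE("CREATE TABLE failed: %s\n", (LPCTSTR)e->m_strError);
    }
    END_CATCH
    db.Close();
}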
NOTE Online help defines a base table as "a table in a Microsoft Jet database." This is a Jet-centric definition of base table, not the definition commonly used in RDBMS circles.
A database consists of one or more tables; if more than one table is included in a database, the entities described in the tables must be related by at least one attribute class (field) that is common to two of the tables. The formal statistical name for a database is heterogeneous universe; a recently coined alternative term is universe of discourse. This book adheres to the term database.

Object-oriented databases (OODBs) don't conform strictly to the atomicity rules for attributes. Similarly, the OLE Object field data type of Jet databases isn't atomic, because the data in fields of the OLE Object field data type contains both the object's data and a reference to the application that created the object. The object type of the contents of Jet OLE Object fields may vary from record to record.

A query is a method by which you obtain access to a subset of records from one or more tables that have attribute values satisfying one or more criteria. There are a variety of ways to process queries; processing queries with the Jet database engine is the subject of Chapter 5. You also can use queries to modify the data in tables. This type of query is described in Chapter 8, "Running Crosstab and Action Queries."
Flat-File Databases
The simplest database form consists of one table with records having enough columns to contain all the data you need in order to describe the entity class. The term flat file is derived from the fact that the database itself is two-dimensional. The number of table fields determines the database's width, and the quantity of table records specifies its height. There are no related tables in the database, so the concept of data depth, the third dimension, doesn't apply. Any database that contains only one table is, by definition, a flat-file database, if the database requires that the tables be flat. (Relational databases, for example, require flat tables.) Flat-file databases are suitable for simple telephone and mailing lists. Windows 3.x's Cardfile is an
example of a simple flat-file database designed as a telephone list. Ranges of cells, which are designated as "databases" by spreadsheet applications, also are flat files. A mailing list database, for example, has designated fields for names, addresses, and telephone numbers. Data files used in Microsoft Word's print merge operations constitute flat-file databases. You run into problems with flat-file databases when you attempt to expand the use of a mailing list database to include, for example, sales contacts. If you develop more than one sales contact at a firm, there are only two ways to add the data for the new contact:
Add a new record with duplicate data in all fields except the contact and, perhaps, the telephone number field.

Add new fields so that you can have more than one contact name and telephone number field per record. In this case, you must add enough contact field pairs to accommodate the maximum number of contacts you expect to add for a single firm. The added fields are called repeating groups.
Neither of these choices is attractive, because both choices are inefficient. Both methods can waste a considerable amount of disk space, depending on the database file structure you use. Adding extra records duplicates data, and adding new fields results in many records that have no values (nulls) for multiple contact and telephone number fields. Adding new fields causes trouble when you want to print reports. It's especially difficult to format printed reports that have repeating groups. Regardless of the deficiencies of flat-file databases, many of the early mainframe computers offered only flat-file database structures. All spreadsheet applications offer "database" cell ranges that you can sort using a variety of methods. Although spreadsheet "databases" appear to be flat, this is seldom truly the case. One of the particular problems with spreadsheet databases is that the spreadsheet data model naturally leads to inconsistencies in attribute values and repeating groups. Time-series data contained in worksheets is a classic example of a repeating group. The section "Organizing Entity Classes" shows you how to deal with inconsistent entity classes that occur in worksheet "databases," and the section "Normalizing Table Data" describes how to eliminate repeating groups.
The inability of flat-file databases to deal efficiently with repeating groups of data led to the development of a variety of different database structures (called models) for mainframe computers. The first standardized and widely accepted model for mainframe databases was the network model, developed by the Committee for Data System Languages (CODASYL), which also developed Common Business-Oriented Language (COBOL) to write applications that manipulate the data in CODASYL network databases. Although the CODASYL database model has its drawbacks, an extraordinary number of mainframe CODASYL databases remain in use today. There are billions of lines of COBOL code in everyday use in North America.

CODASYL databases substitute the term record type for table, but the characteristics of a CODASYL record type are fundamentally no different from the properties of a table. CODASYL record types contain pointers to records of other record types. A pointer is a value that specifies the location of a record in a file or in memory. For example, a customer record contains a pointer to an invoice for the customer, which in turn contains a pointer to another invoice record for the customer, and so on. The general term used to describe pointer-based record types is linked list; the pointers link the records into an organized structure called a network. (A simplified sketch of such linked records follows these paragraphs.)

Network databases offer excellent performance when you're seeking a set of records that pertain to a specific object, because the relations between records (pointers) are a permanent part of the database. However, the speed of network databases degrades when you want to browse the database for records that match specific criteria, such as all customers in California who purchased more than $5,000 worth of product "A" in August 1995. The problem with CODASYL databases is that database applications (primarily COBOL programs) need to update the data values and the pointers of records that have been added, deleted, or edited. The need to sequentially update both data and pointers adds a great deal of complexity to transaction-processing applications for CODASYL databases.

IBM developed the hierarchical model for its IMS mainframe database product line, which uses the DL/1 language. The hierarchical model deals with repeating groups by using a data structure that resembles an upside-down tree: Data in primary records constitutes the branches, and data in repeating groups makes up the leaves. The advantage of the hierarchical model is that the methods required to find related records are simpler than the techniques needed by the network model. As with the CODASYL model, a large number of hierarchical databases are running on mainframe computers today.
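The following C++ fragment is a deliberately simplified, in-memory picture of the customer-and-invoice pointer chain described earlier. It illustrates only the linked-list idea; a real CODASYL DBMS maintains these links as record locations on disk, not as C++ pointers, and the structure and field names here are hypothetical.

#include <stddef.h>

// Each customer record points to its first invoice record; each invoice
// record points to the customer's next invoice (NULL ends the chain).
struct InvoiceRec
{
    long        lInvoiceNumber;
    double      dAmount;
    InvoiceRec* pNextInvoice;
};

struct CustomerRec
{
    char        szName[30];
    InvoiceRec* pFirstInvoice;
};

// Walking the chain finds every invoice for one customer very quickly,
// but adding or deleting an invoice must also repair the pointers.
double TotalInvoices(const CustomerRec& cust)
{
    double dTotal = 0.0;
    for (const InvoiceRec* p = cust.pFirstInvoice; p != NULL; p = p->pNextInvoice)
        dTotal += p->dAmount;
    return dTotal;
}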
The relational database model revolutionized the database world and allowed PCs to replace expensive minicomputers and mainframes for many database applications. The relational database model was developed in 1970 by Dr. E. F. Codd of IBM's San Jose Research Laboratories. The primary advantage of the relational model is that there is no need to mix pointers and data in tables. Instead, records are linked by relations between attribute values. A relation consists of a linkage between records in two tables that have identical attribute values. Figure 4.3 illustrates relations between attribute values of relational tables that constitute part of a sales database. Figure 4.3. Relationships between tables in a sales database.
Because relational tables don't contain pointers, the data in relational tables is independent of the methods used by the database management system to manipulate the records. A relational database management system is an executable application that can store data in and retrieve data from sets of related tables in a database. The RDBMS creates transitory virtual pointers to records of relational tables in memory. Virtual pointers appear when they are needed to relate (join) tables and are disposed of when the database application no longer requires the relation. The "joins" between tables are shown in Figure 4.3. Joins are created between primary key fields and foreign key fields of relational tables. The primary and foreign key fields of the tables in Figure 4.3 are listed in Table 4.1.
Table 4.1. The primary and foreign keys of the tables shown in Figure 4.3.

Table               Primary Key         Foreign Key
Customers           Cust#               None
Invoices            Inv#                Cust#
Invoice Items       Inv# and Prod#      Inv#

Relational databases require duplicate data among tables but don't permit duplication of data within tables. You must duplicate the values of the primary key of one table as the foreign key of dependent tables. A dependent table requires a relationship with another table to identify its entities fully. Dependent tables often are called secondary or related tables. For example, the Invoices table is dependent on the Customers table to supply the real-world name and address of the customer represented by values in the Cust# field. Similarly, the Invoice Items table is dependent on the Invoices table to identify the real-world object, in this case an invoice, to which records are related. Three types of relations are defined by the relational database model:
One-to-one relations require that one and only one record in a dependent table relate to a record in a primary table. One-to-one relations are relatively uncommon in relational databases.

One-to-many relations let more than one record in a dependent table relate to a record in a primary table. The term many-to-one is also used to describe one-to-many relations. One-to-many relations constitute the relational database model's answer to the repeating-groups problem. Repeating groups are converted to individual records in the table on the "many" side of the relation. One-to-many relations are the most common kind of relations. (A join statement for a one-to-many relation follows this list.)

Many-to-many relations aren't true relations, because many-to-many relations between two tables require an intervening table, called a relation table, to hold the values of the foreign keys. (Relational-database theory only defines relations between two tables.) If Figure 4.3 had included a Products table to describe the products represented by the Prod# field of the Invoice Items table, the Invoice Items table would serve as a relation table between the Invoices and Products tables. Some relation tables include only foreign key fields.
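Here, as a sketch, is how the one-to-many join between the Customers and Invoices tables of Figure 4.3 might be written as an SQL statement for a Visual C++ front end. The bracketed field names follow Jet SQL conventions for names containing the # character, and the statement would typically be supplied to the Open() member function of a CRecordset-derived class; other RDBMSs may require different quoting or join syntax.

// One row is returned for each invoice, paired with its customer's key.
const char szJoinSQL[] =
    "SELECT Customers.[Cust#], Invoices.[Inv#] "
    "FROM Customers INNER JOIN Invoices "
    "ON Customers.[Cust#] = Invoices.[Cust#] "
    "ORDER BY Customers.[Cust#]";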
TIP One situation in which a one-to-one relationship is useful is for an employees table in which the employees' names, addresses, and telephone numbers need to be available to many database users, but information about salaries, benefits, and other personal information needs to be restricted on a need-to-know basis. Databases such as Jet don't provide column-level permissions, so you create a one-to-one relationship between the table that contains the nonconfidential data and the one that contains confidential information. Then, you grant read-only permission to everyone (the users group) for the nonconfidential table and grant permission to only a limited number of users for the confidential table.
The proper definition of the relations between entity classes and the correct designation of primary and foreign keys constitute the foundation of effective relational database design methods. The relational database model is built on formal mathematical concepts embedded in relational algebra. Fortunately, you don't need to be a mathematician to design a relational database structure. A set of five rules, discussed in the section "Normalizing Table Data," defines the process of creating tables that conform to the relational model.
The preceding description of the relational database model made the important point that the properties of (such as the data in) a relational table object are independent of the methods used to manipulate the data. This means that you can use any relational database management application to process the data contained in a set of relational tables. For example, you can export the data in the tables of an IBM DB2 mainframe database as a set of text files that preserve the tables' structure. You can then import the text files into tables created by another database management system. Alternatively, you can use Jet, an ODBC driver for DB2, and a network gateway to the DB2 database to access the data directly. The independence of data and implementation in relational databases also lets you attach tables from one database type to another. You can join the attached tables to the native tables in your Jet database without going through the export-import exercise. Thus, you can design a relational database that can be implemented with any relational database manager.
TIP Relational database managers differ in the types of data that you can store in tables and in how you name the fields of tables. Many RDBMSs, such as SQL Server, include the long varbinary field data type, which lets you store image data in tables; others, including the most commonly used versions of IBM's DB2, don't support long varbinary fields or their equivalent. You can embed spaces and other punctuation symbols in Jet table and field names, but you can't in most other RDBMS tables. If you're designing a database that may be ported from the original RDBMS to another relational database implementation, make sure you use only the fundamental field data types and conform to the table- and field-naming conventions of the least versatile of the RDBMSs.
There are, to be sure, substantial differences in how relational database systems are implemented. These differences are often overlooked by people new to database management or people converting from a mainframe database system to a desktop database manager. The following sections discuss how mainframe, minicomputer, and client-server databases differ from traditional desktop database managers.
Full-featured client-server relational database management systems separate the database management application (server or back end) from the individual (client) applications that display, print, and update the information in the database. Client-server RDBMSs, such as Microsoft SQL Server 6.0, run as a process on the server computer. Most client-server systems in use today run under one or more flavors of the UNIX operating system, but Windows NT 3.5+ rapidly is gaining ground on UNIX as an application server operating system. The client-server RDBMS is responsible for the following activities:
Creating new databases and one or more files to contain the databases. (Several databases may reside in a single fixed-disk file.)

Implementing database security to prevent unauthorized people from gaining access to the database and the information it contains.

Maintaining a catalog of the objects in the database, including information on the owner (creator) of the database and the tables it contains.

Generating a log of all modifications made to the database so that the database can be reconstructed from a backup copy combined with the information contained in the log (in the event of a hardware failure).

Usually preserving referential integrity, maintaining consistency, and enforcing domain integrity rules to prevent corruption of the data contained in the tables. Most client-server RDBMSs use preprogrammed triggers that create an error when an application attempts to execute a query that violates the rules.

Managing concurrency issues so that multiple users can access the data without encountering significant delays in displaying or updating data.

Interpreting queries transmitted to the database by user applications and returning or updating records that correspond to the criteria embedded in the query statement. Virtually all client-server RDBMSs use statements written in SQL to process queries, thus the generic name "SQL RDBMS."

Often executing stored procedures, which are precompiled queries that you execute by name in an SQL statement. Stored procedures speed the execution of commonly used queries by eliminating the need for the server to optimize and compile the query each time it runs. (A sketch of calling a stored procedure from a Visual C++ front end follows this list.)
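The following fragment sketches how a front end might execute such a precompiled procedure by name. The {CALL ...} form is the ODBC escape sequence for invoking a procedure; the procedure name sp_UpdateCredit and its single parameter are hypothetical, and the procedure must already exist on the server.

// Execute a server-side stored procedure through an open CDatabase
// connection. Requires <afxdb.h>; no result set is returned to the caller.
void RunCreditUpdate(CDatabase& db, long lCustomerID)
{
    CString strCall;
    strCall.Format("{CALL sp_UpdateCredit (%ld)}", lCustomerID);
    TRY
    {
        db.ExecuteSQL(strCall);
    }
    CATCH (CDBException, e)
    {
        TRACE("Stored procedure failed: %s\n", (LPCTSTR)e->m_strError);
    }
    END_CATCH
}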
Separate database applications (front ends) are responsible for creating the query statements sent to the database management system and for processing the rows of data returned by the query. Front ends, as mentioned in Chapter 1, "Positioning Visual C++ in the Desktop Database Market," handle all of the data formatting, display, and report-printing chores. One of the primary advantages of using an SQL RDBMS is that the features in the preceding list, such as security and integrity, are implemented by the RDBMS itself. Thus, the code to implement these features doesn't need to be added to each different front-end application. Chapter 20, "Creating Front Ends for Client-Server Databases," describes the features of client-server RDBMSs in greater detail.
The stored procedures of client-server databases used to execute predefined queries and maintain database integrity use SQL, plus proprietary SQL language extensions such as Transact-SQL, used by Microsoft SQL Server, Sybase SQL Server, and Sybase System 10+. SQL is a set-oriented, not procedural, programming language. Thus, dialects of SQL aren't well suited to writing programs for validating data in accordance with complex business rules. Here's an example of a complex business rule: "The current credit limit of a customer is equal to the customer's maximum credit limit, less uncontested open invoices and orders in process, unless the customer has outstanding, uncontested invoices beyond terms plus 10 days, or if the total amount of contested invoices exceeds 50 percent of the total amount of open invoices. If a pending order exceeds the customer's calculated credit limit, or if any customer payment is behind terms plus 10 days, approval must be obtained from the credit manager before accepting the order." Such a test is quite difficult to program as an SQL stored procedure, because obtaining the credit manager's approval would be difficult for the SQL stored procedure.

Three-tier client-server architecture adds a processing layer between the front-end client and the back-end server. This processing layer, often called a line-of-business object (LOBject), processes requests from client applications, tests the requests for conformance with programmed business rules, and sends conforming requests to the back-end RDBMS, which updates the affected tables. Each client application using the LOBject creates its own instance of the RAO (Remote Automation Object). Figure 4.4 illustrates the architecture of a three-tier client-server application that uses Microsoft Mail 3.5 to process credit approvals (or rejections) for the scenario described in the preceding paragraph.

Figure 4.4. A three-tier client-server database system for implementing a credit management LOBject.
Traditional desktop database managers, such as dBASE and Paradox, combine their database management features with the interpreter or compiler that executes the application's source code. The early versions of these products let you create quite sophisticated database applications that would run on PCs with only 256K of RAM. The constraints of available RAM in the early days of the PC required that the database management portion of the desktop DBM's code include only features that were absolutely necessary to make the product operable. Thus, products of this type, which also include early versions of FoxPro and Clipper for DOS, don't truly qualify as full-fledged relational database management systems; they are more properly termed database file managers. You implement the "relational" and the "management" features through the application source code you write. The components of the dBASE and Paradox DBMs that manipulate the data contained in individual tables don't provide the built-in features of true relational database management systems listed in the preceding section. (The exception is the desktop products' capability to create a file that contains one table.) You need to write application code to enforce referential and domain integrity (however, Paradox for Windows will enforce referential integrity), and a one-DOS-file-per-table system doesn't lend itself to establishing secure databases. The commercial success of dBASE (especially dBASE III+) and Paradox for DOS created a user base of perhaps six million people. (Borland claims there are four million dBASE users worldwide, about the same number of copies of Microsoft Access that had been sold when this book was written.) Thus, dBASE and Paradox product upgrades need to be backwardly compatible with tens of millions of .DBF
and .DB files and billions of lines of dBASE and PAL code. New features, such as file and record locking for multiuser applications and the capability to use SQL to create queries, are add-ins (or tack-ons) to the original DBM. Thus, both dBASE and Paradox are currently losing market share to relatively low-cost client-server RDBMSs such as Microsoft SQL Server, to Microsoft Access (a hybrid of the desktop DBM and the full-featured RDBMS), and to Visual C++.
Microsoft Access is a cross between a conventional desktop DBM and a complete, full-featured RDBMS. Access uses a single database file that includes all the tables that are native to the database. Access's Jet database engine enforces referential integrity for native tables at the database level, so you don't need to write Access Basic (Access 1+) or VBA (Access 95) code to do so. Jet enforces domain integrity at the field and table level when you alter the value of a constrained field. Jet databases include system tables that catalog the objects in the database, and the database drivers handle concurrency issues. Access lets you break the back end/front end barrier that separates RDBMSs from desktop DBMs. Application code and objects, such as forms and reports, can be included in the same database file as the tables. Microsoft used Access's ability to include both front-end and back-end components in a single .MDB file as a strong selling point. It soon became apparent to Access developers that separating application and database objects into individual .MDB files was a better design. You create a Jet database that contains only tables and attach the tables to an Access .MDB file that provides the front-end functionality. User names and passwords are stored in a separate .MDW workgroup library file. This is necessary because a Jet .MDB file contains only one database. Here again, you can put sets of unrelated tables in a single .MDB file. Jet's flavor of SQL (which is proprietary, as are most other implementations of SQL) is the native method of manipulating data in tables, not an afterthought. The Jet DLLs that implement the database functionality are independent of the MSACCESS.EXE file, which includes the code you use to create forms and reports. Jet databases are about as close as you can get to an RDBMS in a low-cost, mass-distributed software product. Using Jet .MDB database files with Visual C++ front ends approximates the capabilities and performance of client-server RDBMSs at a substantially lower cost for both hardware and software. If you're currently using one-file-per-table DBMs, consider attaching the tables to a Jet database during the transition stage while both your new Windows front ends and your present DBM or old character-based DOS applications need to simultaneously access the tables. Once the transition to Visual C++ front ends is complete, you can import the data to a Jet database and take full advantage of the additional features that .MDB files offer. If you outgrow the Jet database structure, it's a quick and easy port to SQL Server 6.0 for Windows NT 3.5 using the Microsoft SQL Server ODBC driver.
NOTE You no longer need a copy of Access in order to take advantage of Access's built-in enforcement of referential integrity and the security features of Jet databases. Visual C++ 4.0 and the Jet 3.0 Data Access Object now provide programmatic implementation of referential integrity and security. The Jet database engine and the 32-bit Jet ODBC 2.0 driver now support the SQL FOREIGN KEY and REFERENCES reserved words to define relationships during table creation. However, they don't yet implement the SQL-92 CHECK reserved word, which lets you enforce domain integrity with ranges or lists of values that constrain attribute values. If you're seriously into database development with Jet 3.0 databases, purchasing a copy of Access for Windows 95 (sometimes called Access 7, and referred to as Access 95 in this book) quickly repays your investment, because Access lets you establish relationships in a graphic relationships window, supplies a simple method of adding field-level and table-level constraints, provides a graphic query-by-design window, and generates Jet SQL statements for you.
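As a sketch of the FOREIGN KEY and REFERENCES usage mentioned in the preceding Note, the following fragment creates the Customers and Invoices tables of Figure 4.3 and declares the relationship between them during table creation. It assumes an open CDatabase connection (db) through the 32-bit Jet ODBC driver; the CustName field, the field sizes, and the data type keywords are illustrative and vary among SQL dialects.

// Create the primary table, then the dependent table whose Cust# foreign
// key references the Customers table's primary key.
db.ExecuteSQL("CREATE TABLE Customers "
              "([Cust#] INTEGER NOT NULL, "
              "CustName CHAR(40), "
              "CONSTRAINT pkCustomers PRIMARY KEY ([Cust#]))");
db.ExecuteSQL("CREATE TABLE Invoices "
              "([Inv#] INTEGER NOT NULL, "
              "[Cust#] INTEGER NOT NULL, "
              "CONSTRAINT pkInvoices PRIMARY KEY ([Inv#]), "
              "CONSTRAINT fkCustomers FOREIGN KEY ([Cust#]) "
              "REFERENCES Customers ([Cust#]))");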
Modeling Data
The first step in designing a relational (or any other) database is to determine what objects need to be represented by database entities and what properties of each of these objects require inclusion as attribute classes. The process of identifying the tables required in the database and the fields that each table needs is called data modeling. You can take two approaches when data modeling:
Application-oriented design techniques start with a description of the type of application(s) required by the potential users of the database. From the description of the application, you design a database that provides the necessary data. This is called the bottom-up approach, because applications are ordinarily at the bottom of the database hierarchy.

Subject-oriented design methodology begins by defining the objects that relate to the subject matter of the database as a whole. This approach is called top-down database design. The content of the database determines what information front-end applications can present to the user.
Even though application-oriented design might let you quickly create an ad hoc database structure and the applications to accomplish a specific goal, bottom-up design is seldom a satisfactory long-term solution to an organization's information needs. It's common to find several application-oriented databases within
an organization that have duplicate data, such as independent customer lists. When the firm acquires a new customer, each of the customer tables needs to be updated. This is an inefficient and error-prone process. Subject-oriented database design is a far more satisfactory method. You might want to divide the design process into department-level or workgroup-related databases, such as those in the following list:
A sales database that has tables based on customer, order and line item, invoice and line item, and product entity classes.

A production database with tables for parts, suppliers, bills of material, and cost accounting information. The product and invoice tables of the sales department's database would be attached to the production database.

A personnel database with tables for employees, payroll data, benefits, training, and other subjects relating to human-resources management. The production and sales databases would attach to the employees table: production for cost accounting purposes, and sales for commissions.

An accounting database with tables comprising the general ledger and subsidiary ledgers. The accounting database would attach to the majority of the tables in the other databases to obtain access to current finance-related information. Accounting databases often are broken into individual orders, accounts receivable, accounts payable, and general ledger databases.
There is no fixed set of rules to determine which shared tables should be located in what database. Often, these decisions are arbitrary or are based on political, rather than logical, reasoning. Department-level databases are especially suited for multiuser Jet databases running on peer-to-peer networks with 30 or fewer users. Each department can have its own part-time database administrator (DBA) who handles backing up the database, granting and revoking the rights of users to share individual tables in the database, and periodically compacting the database to regain the space occupied by deleted records.
Database Diagrams
Diagramming relations between tables can help you visualize database design. Entity-relation (E-R) diagrams, also called entity-attribute-relation (EAR) diagrams, are one of the most widely used methods of depicting the relations between database tables. The E-R diagramming method was introduced by Peter Chen in 1976. An E-R diagram consists of rectangles that represent the entity classes (tables). Ellipses above table rectangles show the attribute class (field) involved in the relation. Pairs of table rectangles and field ellipses are connected by parallelograms to represent the relation between the fields. Figure 4.5
is an E-R diagram for the Customers and Invoices tables of the database described in Figure 4.3 and Table 4.1. The "1" and "m" adjacent to the table rectangles indicate a one-to-many relationship between the two tables. Figure 4.5. An entity-relationship diagram showing the relationship between the Customers and Invoices tables. E-R diagrams describe relations by predicates. One of the dictionary definitions of the noun "predicate" is "a term designating a property or relation." If you remember parsing sentences in English class, you'll observe that "Customers" is the subject, "Are Sent" is the predicate, and "Invoices" is the predicate object of a complete sentence. E-R diagrams can describe virtually any type of allowable relation between two tables if you add more symbols to the basic diagram shown in Figure 4.5. A large number of E-R diagrams are required to define relationships between the numerous entities in enterprise-wide databases.
Designing databases to accommodate the information requirements of an entire firm is a major undertaking. Thus, computer-aided software engineering (CASE) tools often are used to design complex database systems. CASE tools for database design usually include the following capabilities:
Business model generation: The first step in the use of a high-end CASE tool is to create an operational model of an enterprise, which consists of defining virtually every activity involved in operating the organization. Accurately modeling the operations of a large firm as they relate to information requirements is an extraordinarily difficult and time-consuming process.

Schema development: A database schema is a diagram that describes the entire information system pictorially, usually with less detail than that offered by E-R diagrams. The schema for a large information system with a multiplicity of databases might cover an entire wall of a large office or conference room.

Relation diagramming: Some CASE tools support several methods of diagramming relations between tables. Most, but not all, CASE tools support E-R diagrams, as well as other pictorial methods, such as Bachman diagrams.

Data dictionary development: A data dictionary is a full description of each table in the database and each field of every table. Other properties of tables and fields, such as primary keys, foreign keys, indexes, field data types, and constraints on field values, are included. Creating data dictionaries is one of the subjects of Chapter 22, "Documenting Your Database Applications."

Repository creation: A repository is a database that is part of the CASE tool. It contains all the details of the structure and composition of the database. Data in the repository is used to create schema, relation diagrams, and data dictionaries. The repository is also responsible for maintaining version control when you change the database's design. When this book was written, Microsoft and Texas Instruments had joined forces to develop an object-oriented repository for 32-bit Windows database development.

Database generation: After you've designed the database, the CASE tool creates the SQL Data Definition Language (DDL) statements necessary to create the database and its tables. You then send the statements to the RDBMS, which builds the database for you.

Data flow diagramming: Some database CASE tools include the capability to create data flow diagrams that describe how data is added to tables and how tables are updated. However, data flow diagrams are application-related, not database-design-related. Thus, data flow diagramming capability isn't a prerequisite for qualification as a CASE database tool.
Mainframe database developers have a variety of CASE tools from which to choose. Several CASE tools serve the client-server market, such as Popkin Software's Data Architect. Developers using desktop DBMs haven't been so fortunate. No commercial CASE tools with the basic features in the preceding list were available for xBase and Paradox developers at the time this book was written. Database modeling tools are a simplified version of CASE tools for information systems development. Modeling tools omit the business modeling aspects of CASE tools but implement at least the majority of the features described in the preceding list. An example of a database modeling tool designed specifically for Jet databases is Asymetrix Inc.'s InfoModeler. InfoModeler is a full-fledged database design system that you can use to create Jet databases from structured English statements that define entity and attribute classes. InfoModeler is described more fully in Chapter 22.
form.
The section "Flat-File Databases" noted that the worksheet data model often contains inconsistent entities in rows. The stock prices example, shown in Figure 4.6, shows an Excel worksheet whose structure violates every rule applicable to relational database tables except attribute atomicity. STOCKS is a worksheet that lists the New York Stock Exchange's (NYSE) closing, high, and low price for shares and the sales volume of 25 stocks for a five-day period. Rows contain different entities, and columns B through F are repeating groups. The Stocks5.xls workbook in Excel 5.0/7.0 format, which is included on the CD-ROM that comes with this book, is used in the following examples. You'll find Stocks5.xls in the CHAPTR04 folder on the CD that comes with this book. Figure 4.6. A worksheet whose structure is the antithesis of proper database table design. You need to separate the entity classes according to the object each entity class represents. The four entity classes of the STOCKS worksheet of the Stocks5.xls workbook are the closing price, the highest transaction price, the lowest transaction price, and the trading volume of a particular stock on a given day. To separate the entity classes, you need to add a column so that the stock is identified by its abbreviation in each row. You can identify the data entities by their classesClose, High, Low, and Volumeplus the abbreviation for the stock, which is added to the new column with a simple recorded Excel VBA macro. Then, you sort the data with the Entity and Key columns. The result of this process appears as shown for the Stocks1 worksheet, shown in Figure 4.7. Figure 4.7. The STOCKS worksheet with entities sorted by entity class.
NOTE The dates column represents a mixed entity type (three prices in dollars and the volume in shares), but each entity is now identified by its type. Thus, you can divide the entities into separate tables at any point in the transformation process.
Now you have a table that contains entities with consistent attribute values, because you moved the inconsistent stock name abbreviation to its own attribute class, Key, and replaced the stock abbreviation in the Entity column (column A) with a value consistent with the Entity attribute class Close. However, the repeating-groups problem remains.
NOTE This chapter uses manual worksheet methods of manipulating tabular data because worksheet techniques such as selecting, cutting, and pasting groups of cells represent the easiest and fastest way of changing the structure of tabular data. If you need to transform a large amount of worksheet data into relational tables, you should use Visual C++ OLE Automation methods to automate the transformation process.
The process of transforming existing data into relational form is called normalization. Normalization of data is based on the assumption that you've organized your data into a tabular structure wherein the tables contain only a single entity class. The objectives of normalization of data include the following:
Eliminating duplicated information contained in tables

Accommodating future changes to the structure of tables

Minimizing the impact of changes to database structure on the front-end applications that process the data
The following sections describe the five steps that constitute full normalization of relational tables. In most cases, you can halt the normalization process at third normal form, or model. Many developers bypass fourth and fifth normal forms because these normalization rules appear arcane and inapplicable to everyday database design.
First normal form requires that tables be flat and contain no repeating groups. A data cell of a flat table may contain only one atomic (indivisible) data value. If your imported data contains multiple data items
in a single field, you need to add one or more new fields to contain each data item and then move the multiple data items into the new field(s). The Northwind.mdb sample database included with Access 95 has a Customers table whose Address field contains data that violates first normal form: some cells contain a two-line address. Figure 4.8 shows the Customers table of Northwind.mdb (which was NWIND.MDB in earlier versions of Access) in datasheet mode. The multiline addresses for Hungry Coyote Import Store and Island Trading violate the atomicity rule. Thus, you need another field, such as Location, to contain the second line of two-line entries in the Address field. For parcel delivery services such as Federal Express, you need the physical address in the Location field for firms that use post office boxes to receive their mail.

Figure 4.8. First normal form violations in the Customers table of Access 95's Northwind.mdb sample database.
NOTE If you're an xBase or Paradox developer, you might think that adding a field to contain the physical location portion of the address causes unwarranted expansion of the size of the Customers table. This isn't the case with Jet tables, because Jet databases use variable-length fields for the Text field data type. If an entry is missing in the Location field for a customer, the field contains only the Null value in databases that support Null values. The size issue is applicable to fixed-width xBase and Paradox table fields, because you must provide enough width to accommodate both lines of the address, whether one or two fields are used to contain the address data. Jet tables use variable-length fields for Text entries. An alternative to an additional Location field is to create a separate table for location data that has an optional one-to-one relation with the Customers table. This is a less efficient process than accepting Null or blank values in records that don't have Location values.
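If you choose to add a Location field rather than a separate table, the change can be made with a single DDL statement from a Visual C++ front end. The following line is a sketch only: it assumes an open CDatabase connection (db) to a copy of Northwind.mdb, and it uses Jet SQL's ADD COLUMN and TEXT(n) syntax, which differs in other SQL dialects. Moving the second line of each offending Address entry into the new field is then a record-by-record edit or update query.

// Add a variable-length text field for the second address line.
db.ExecuteSQL("ALTER TABLE Customers ADD COLUMN Location TEXT(60)");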
Eliminating repeating groups is often a tougher process when you're transforming worksheet data. Four of the five columns of stock price data shown in Figure 4.7 are repeating groups. The quick and dirty method of eliminating repeating groups is a series of copy, cut, paste, and fill operations. You add another column to specify the date for each entry. Then you cut and paste the cells for the four repeating groups into the column renamed from the beginning date of the series to PriceVolume. The final step is to sort the data on the Entity, Key, and Date fields. A portion of the resulting worksheet (Stocks2) appears in Figure 4.9. Instead of 101 rows for the 25 stocks, you now have 501 rows and what appears to be a large amount of duplicated data. However, your data is now in first normal form.
Figure 4.9. The STOCKS worksheet transformed to first normal form in the Stocks2 worksheet.
Second normal form requires that all data in nonkey fields of a table be fully dependent on the primary key and on each element (field) of the primary key when the primary key is a composite primary key. "Fully dependent on" means the same thing as "uniquely identified by." It's clear from examining the data shown in Figure 4.9 that the only nonkey column of Stocks2 is PriceVolume. The Entity, Key, and Date fields are members of the composite primary key. The sorting process used in the preceding section proves this point.
There is a controversy among database designers as to whether objects that have a common attribute class, such as price, should be combined into a single table with an identifier to indicate the type of price, such as List, Distributor, or OEM for products, or in this case, Close, High, and Low transaction prices for the day. This process is called subclassing an entity. There's no argument, however, that the volume data deserves its own worksheet, at least for now, so you cut the volume data from Stocks2 and paste it to a new worksheet called Volume. You can delete the Entity column from the Volume sheet, because the name of the sheet now specifies the entity class. The data in the Volume sheet, with field names added, appears as shown in Figure 4.10. Each entity now is uniquely identified by the two-part composite primary key comprising the Key and Date fields. You can import the data from the Volume sheet into a table from any application that supports importing Excel 5.0/7.0 tables contained in a workbook, such as Access 95. Both the Stocks2 and Volume worksheets contain data in second normal form.
Figure 4.10. The Volume worksheet in second normal form.
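If you later import the Volume sheet into a Jet table, the table's structure might be declared as shown below. This is a sketch only; the field names, Text field width, and data types are assumptions rather than requirements, and the bracketed names simply avoid conflicts with the Key and Date names in Access SQL:

CREATE TABLE Volume
    ([Key] TEXT(10),
     [Date] DATETIME,
     Shares LONG,
     CONSTRAINT PrimaryKey PRIMARY KEY ([Key], [Date]));

The two-field CONSTRAINT clause implements the composite primary key that uniquely identifies each entity in the Volume data.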
Third normal form requires that all nonkey fields of a table be dependent on the table's primary key and independent of one another. Thus, the data in a table must be normalized to second normal form in order to assure dependency on the primary key. The issue here is the dependencies of nonkey fields. A field is dependent on another field if a change in the value of one nonkey field forces a change in the value of another nonkey field. At this point in the normalization process, you have the following choices for how to design the table(s):
- Leave the data remaining in the Stocks2 worksheet as is and create one Prices table, using the Entity column to subclass the price entity. This method requires a three-field composite key.
- Create a Prices table with three columns, High, Low, and Close, using Key and Date as the composite primary key. You could even add the volume data to a Volume field and then have only one record per stock per day.
- Create three separate prices tables, High, Low, and Close, and use Key and Date as the composite primary key. In this case, you don't subclass the entities.
Deciding on a table structure that meets third normal form is a judgment call based on the meaning of the term independent. Are stock prices and trading volumes truly independent of one another? Are the closing, high, and low prices dependent on the vagaries of the stock market and the whims of traders, and thus independent of one another? These questions mix the concepts of dependence and causality. Although it's likely that a higher opening price will result in a higher closing price, the causality is exogenous to the data itself. Exogenous data is determined by factors beyond the control of any of the users of the database. The values of the data in the table are determined by data published by the NYSE after the exchange closes for the day. Thus, the values of each of the attribute classes are independent of one another, and you can choose any of the three methods to structure your stock prices table(s).
After you've determined that your data structure meets third normal form, the most important consideration is to avoid over-normalizing your data. Over-normalization is the result of applying too strict an interpretation of dependency at the third normal form stage. Creating separate tables for high, low, and close prices, as well as share-trading volume, is overkill. You need to join all four tables in one-to-one relationships to display the four data values for a stock. Joining is a very slow process unless you create indexes on the primary key of each table. You have four tables, so you need four indexes. Even after indexing, the performance of your stock prices database won't be as fast as that of a table that contains all the values. Plus, the four indexes take up four times as much disk space as a single index on a table that contains fields for all four attributes. The rule for third normal form should have two corollary rules:
1. Combine all entities of an object class that can be uniquely identified by the primary (composite) key and whose nonkey values either are independent of one another or are exogenous to the database and all related databases into a single table, unless the combination violates fourth normal form. Combining entities into a single table is called integrating data.
2. Decompose data into tables that require one-to-one relationships only when the relationship is optional or when you need to apply security measures to nonkey values and your RDBMS supports column-level permissions. Decomposing data means breaking a table into two or more tables without destroying the meaningfulness of the data.
Thus, you can find the answer to the question of which structure is best by considering the suggested corollary rules for third normal form. Create a new Stocks3 worksheet with fields for the high, low, and close prices, as well as for the trading volume. Then paste the appropriate cells to Stocks3 and add field names. Figure 4.11 shows the result of this process.
Figure 4.11. The stock prices data in third normal form.
NOTE A more elegant method of transforming worksheet data with repeating data groups to third normal form would use Excel's new Pivot feature, introduced with Excel 5.0, to perform some of the transformation operations for you. (The Pivot feature is related to the TRANSFORM and PIVOT statements of Jet SQL, which are discussed in Chapter 8.)
Fourth normal form requires that independent data entities not be stored in the same table when many-to-many relations exist between these entities. If many-to-many relations exist between data entities, the entities aren't truly independent, so such tables usually fail the third normal form test. Fourth normal form requires that you create a relation table that contains any data entities that have many-to-many relations with other tables. The stock prices data doesn't contain data in a many-to-many relation, so this data can't be used to demonstrate decomposition of tables to fourth normal form.
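Although the stock data can't demonstrate fourth normal form, Northwind.mdb's Order Details table is a familiar example of a relation table: Orders and Products have a many-to-many relation, so neither entity is stored in the other's table. A minimal sketch of such a relation table follows; the OrderItems name and the field list are hypothetical simplifications, not the actual Northwind design:

CREATE TABLE OrderItems
    (OrderID LONG,
     ProductID LONG,
     Quantity INTEGER,
     CONSTRAINT PrimaryKey PRIMARY KEY (OrderID, ProductID));

Each row of the relation table pairs one order with one product, so the many-to-many relation is decomposed into two one-to-many relations.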
Fifth normal form requires that you be able to exactly reconstruct the original table from the new table(s) into which the original table was decomposed or transformed. Applying fifth normal form to your resulting table is a good test to make sure you didn't lose data in the process of decomposition or transformation. The Stocks3 worksheet contains every piece of data that is contained in the original STOCKS worksheet, so, with enough cutting, pasting, and sorting, you could restore it. It's often a tedious process to prove compliance with fifth normal form. Fortunately, compliance with the fifth normal form rule doesn't require that you be able to use ANSI SQL statements to reconstruct the original table.
NOTE An excellent discussion of indexing as it relates to the query optimizer in MS SQL Server 6.0 appears in the book Microsoft SQL Server Unleashed (Sams Publishing, 1996).
In simplified terms, an index consists of a table of pointers to records or groups of records. The records that contain pointer values, usually with an unsigned long integer data type, are organized in a binary hierarchy to reduce the number of tests required to find a record that matches the search criteria. The three levels of the index hierarchy traditionally are called the root, branch, and leaf levels. (Here again is the analogy to an inverted tree.) However, the number of levels in the branch hierarchy actually depends on the number of records in the indexed table. The root leads to one of two branches, and each branch leads to another branch until you reach the leaf level, which is indivisible. The leaf level of the index contains the pointers to the individual records or, in the case of Jet and most client-server databases, to the pages that contain the records. The branches contain pointers to other branches in the index or to the leaves. The exact method of indexing field values varies with the database manager you use; dBASE (.NDX and .MDX files), FoxPro (.IDX and .CDX), Clipper (.NTX), Paradox (.PX), Btrieve, and Jet indexes all vary in structure. (Btrieve and Jet don't store indexes in separate files, so no file extensions are given for these two databases.) Regardless of the indexing method, indexing techniques reduce the number of records that must be searched to find the first record matching the search criteria. The most efficient indexes are those that find the first matching record with the fewest tests (passes) of the value of the indexed field.
Traditional desktop DBMs store fixed-width records in individual files and store indexes on the fields of the file in one or more index files. FoxPro 2+ and dBASE IV let you store multiple indexes for a single table in a single .CDX or .MDX file, respectively. The table files used by these database managers have fixed-length records, so you can identify a record by its offset (its distance in bytes) from the beginning of the data in the file, immediately after the header portion of the file. Thus, pointers to records in these files consist of offset values. Jet and the majority of client-server databases store indexes as special structures (not tables) within the database file. These database types support variable-length fields for Text (varchar), Memo (long varchar), OLE Object, and Binary field data types (varbinary and long varbinary). To prevent the tables from becoming full of holes when you delete records or extend the length of a variable-length field, Jet and SQL Server databases use pages to store data rather than records. Jet and SQL Server pages are 2K in
length, corresponding to the standard size of a cluster on a fixed disk of moderate size formatted by Windows 95 or DOS. (As the size of fixed disk partitions grows, so does the size of the clusters. As an example, the cluster size of a 1G drive using DOS's and Windows 95's FAT file system is 32K.) Thus, if you increase the number of characters in a text field, the worst-case scenario is that the RDBMS must move 2K of data to make room for the data. If there isn't enough empty space (called slack) in the page to hold the lengthened data, the RDBMS creates a new page and moves the data in the record to the new page. Figure 4.12 illustrates the structure of the 2K data pages employed by Jet and SQL Server databases with about 1.7K of variable-length records and roughly 350 bytes of slack.
Figure 4.12. A page in a Jet or SQL Server database with variable-length records.
NOTE The only drawback to the page-locking methodology is that you lock an entire page when updating a record in the page. If the record you're updating is very small, the lock can affect a number of records that you aren't editing. If you use the optimistic locking technique offered by Jet and SQL Server, the lock is likely to be transparent to other database users, especially if your front-end application is very fast. dBASE and Paradox tables use record locking, which affects only the single record being edited. Hopefully, future versions of Jet and Microsoft SQL Server will support individual record locking.
Btrieve table files include data pages and index pages. Data pages consist of separate fixed-length and variable-length pages. The size of all pages within a Btrieve file must be equal. You can specify the size of the pages in the range of 512 to 4,096 bytes when you create the table. Choosing the correct page size for the type of data and the average length of records in the table can have a profound effect on the performance of Btrieve databases.
Balanced-Tree Indexes
The most common method of indexing tables is the balanced binary tree (B-tree) method, originally proposed by the Russian mathematicians G. M. Adelson-Velskii and E. M. Landis in 1963. Prior to the B-tree method, editing, inserting, and deleting indexed fields of records caused the index trees to become lopsided, increasing the number of passes required to find the record or page that had a matching value. The balanced B-tree method reorganizes the tree to ensure that each branch connects to two other branches or to a leaf. Thus, the B-tree index needs to be reorganized each time you add or delete a record.
B-tree indexes speed decision-support queries at the expense of transaction-processing performance. In a B-tree index structure, the length of a search path to a leaf is never more than 145 percent of the optimum path.
There is a truism in the database industry regarding the indexing of the fields of tables: Index only the fields you need to index in order to enhance the performance of your database front ends, and don't index any other fields. The more indexes you add to a table, the longer it takes to update entries that affect the values of indexed fields and to add a new record, which requires updating all indexes. The problem here is knowing which fields improve application performance. The first step is to determine what your options are. The following list discusses how the database types supported by the Jet database engine handle the indexing of primary key fields:
- Jet tables for which you specify primary key fields in a TableDef object have no-duplicates indexes that are automatically created by the Jet database engine on these fields. Most client-server databases that you connect with the ODBC API also have no-duplicates indexes on primary key fields, although the indexes usually aren't created automatically. A no-duplicates index prevents the addition of a record with duplicate values in the primary key fields. You can't remove the primary key index or specify "duplicates OK" without deleting the primary key designation. (A sketch of the SQL that creates this kind of index appears after this list.)
- Specify a clustered index for each table of client-server databases that support clustered indexes. (Microsoft and Sybase SQL Server offer clustered indexes.) Clustered indexes reorganize the pages of the database in the order of the clustered index. Chapter 20 discusses how to choose the field on which to add a clustered index.
- Paradox tables also require no-duplicates indexes on the primary key fields. If the FILENAME.PX index file isn't in the same directory as the corresponding FILENAME.DB file, your application can read the file but not update it. (If FILENAME.PX is missing, the Jet database engine creates a nonupdatable Recordset object of the Snapshot type, rather than the Dynaset type, from the table data.)
- dBASE tables created by xBase applications don't support designating fields as primary key fields, and you can't create the equivalent of a no-duplicates index on dBASE tables. Even after you execute the xBase SET UNIQUE ON statement, xBase applications let you append as many records as you want with duplicate values in the indexed fields that you use in lieu of primary key fields. (The index ignores records with duplicate indexed field values.) Your application needs to test for duplicate values and prevent the addition of records that would cause duplicated indexed field values. (You need the index so that you can quickly find whether a duplicate value exists.) You can update data in dBASE table files with the Jet database engine, because no primary key can be specified.
- Btrieve files support key fields, but they don't designate primary key fields. Btrieve doesn't provide a no-duplicates option. (You can create a no-duplicates key field by using Btrieve's autoincrement extended key field type to insert a consecutive long integer value in the records you add.) Unlike dBASE, Btrieve adds 8 bytes of data to the field to indicate the chronological order of the addition of fields with duplicate key values. Your application needs to test for existing key values to prevent duplication.
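As a point of reference for the first item in the preceding list, the no-duplicates index that the Jet engine builds for a primary key is roughly what the following Access SQL DDL statement creates. This is a sketch only; Jet DDL is covered in Chapter 8, the Customers/CustomerID names are borrowed from the Northwind-style samples, and client-server dialects use their own PRIMARY KEY or UNIQUE INDEX syntax rather than the WITH PRIMARY clause:

CREATE UNIQUE INDEX PrimaryKey ON Customers (CustomerID) WITH PRIMARY;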
After you've determined whether you need to create an index on the primary key field(s), you need to consider which other fields to index. The following list provides some suggestions that apply to all database types:
- Use short codes to identify entities that don't have properties designed to identify the entity, such as part numbers for products. Creating indexes on long text fields, such as product names, unnecessarily increases the size of the index table, slows performance, and wastes disk space. The other side of the coin is that searches on text fields such as product names sometimes occur often.
- Indexes based on numeric values usually have better performance than indexes on character fields. Using auto-incrementing fields (for example, Jet 3.0's AutoIncrement, formerly Counter, field data type) as a primary key field sometimes is feasible when you import existing data.
- Index the foreign key fields that participate in joins with other tables (a sketch of such an index appears after this list). Index the foreign key fields that your client will search most often.
- Don't create a separate index for the indexed fields of the composite primary key. Almost all database management systems enable searches on partial key matches, so such an index would duplicate the existing primary key index.
- Prior to Jet 2.0, you needed to use Table objects, not Dynaset or Snapshot objects, and the Seek method if you wanted to take advantage of indexes other than the primary key index in your Visual C++ 4.0 applications. When you use the Find... methods on Jet 2+ tables or queries, Jet uses the appropriate index (if it exists). Jet 2+ indexes use Rushmore technology (adopted from FoxPro), which provides a major improvement in indexed searches. 32-bit Jet 3.0 provides a major boost in the speed of indexing and indexed searches compared with Jet 2.0 and 2.5.
- Avoid using Like "*Criteria" statements in Jet SQL and LIKE '%Criteria' statements in ANSI SQL. Queries that contain these statements can't use indexes.
- Don't try to create indexes on fields of the long data types, long varchar (Jet Memo fields) and long varbinary (Jet OLE Object fields). Neither the Jet database engine nor client-server RDBMSs can create indexes on these field data types.
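Two of these suggestions can be sketched in SQL against a hypothetical Orders table; treat the table, field, and index names as assumptions for illustration:

CREATE INDEX CustomerID ON Orders (CustomerID);
SELECT * FROM Orders WHERE CustomerID Like "BONAP*"
SELECT * FROM Orders WHERE CustomerID Like "*NAP*"

The CREATE INDEX statement indexes the CustomerID foreign key. The first SELECT statement can use that index; the second, with the leading wildcard, cannot, so the Jet engine must scan the entire table.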
NOTE The Jet database engine uses query optimization techniques to choose which indexes to use when your application processes an SQL query.
If you follow the rules in the preceding list, you probably won't go too far wrong in choosing which fields of your tables to index. If you're using Jet tables and have a copy of Microsoft Access, comparing the performance of queries with and without foreign key indexes is a simple process. Jet indexes as a whole are much smaller than dBASE and FoxPro 2+ indexes. Thus, you can relegate disk space issues to a lower priority when determining how many indexes to add to a Jet table. Chapter 15, "Designing Online Transaction-Processing Applications," compares index sizes of Jet and dBASE tables.
NOTE You need at least several hundred records in order to test the effects of indexing foreign key fields on your application's performance. The more records, the better the test. If you or your client observes that the performance of your Visual C++ 4.0 front end deteriorates as the number of records in the database increases, you might have failed to add an index to an important foreign key. You won't need to change your application's code to utilize the new indexes, except where you use Seek operations on Recordset objects of the Table type, which you can now safely replace with Find... methods on Recordset objects of the Dynaset and Snapshot types. The Jet database engine's query optimizer automatically uses the new index when the new index aids a query's performance.
Summary
This chapter introduced you to the methodology of designing efficient relational database structures, including modeling tools for Jet databases and normalizing tables that you create from existing data sources. Entire books are devoted to database design techniques, and at least one large book covers only Object Role Modeling design methods. If you're interested in the theoretical side of relational databases,
including relational algebra, go to your local library and check out a copy of E. F. Codd's or Chris Date's book on the subject. Methods of indexing tables created by desktop DBMs and client-server RDBMSs received only cursory treatment here. Donald E. Knuth's Sorting and Searching, mentioned earlier in this chapter, provides an excellent introduction to indexing methods, despite the age of the book. The suggestions at the end of this chapter on how to optimize the indexing of tables are useful with any type of database.
The next chapter introduces you to structured query language (SQL), the third language you need to understand in order to fully utilize this book and Visual C++ 4.0 as a database front-end generator. If you're an Access developer, you're probably used to cutting and pasting SQL statements from the SQL window of query design mode into the RecordSource property of combo boxes or into your Access code. Visual C++ 4.0 doesn't have a query design mode or an SQL window, so you really do need to learn SQL, specifically the Jet dialect of ANSI SQL, to create commercial database applications with Visual C++ 4.0.
Reviewing the Foundations of SQL
    Elements of SQL Statements
    Differences Between SQL and Procedural Computer Languages
    Types of ANSI SQL
Writing ANSI SQL Statements
    Categories of SQL Statements
    The Formal Grammar of SQL
    The Practical Grammar of a Simple SQL SELECT Statement
    Using the MS Query Application to Explore Queries
    SQL Operators and Expressions
    Dyadic Arithmetic Operators and Functions
    Calculated Query Columns
    Monadic Text Operators, Null Value Predicates, and Functions
    Joining Tables
    Conventional Inner or Equi-Joins
    Multiple Equi-Joins
    OUTER JOINs
    Theta Joins and the DISTINCTROW Keyword
    Self-Joins and Composite Columns
    SQL Aggregate Functions and the GROUP BY and HAVING Clauses
Comparing the Access SQL Dialect and ODBC
    ANSI SQL Reserved Words and Access SQL Keywords
    Data Type Conversion Between ANSI SQL and Access SQL
Summary
Structured query language (SQL) is the lingua franca of relational database management systems. Visual C++ 4.0 and Microsoft Access both use SQL exclusively to process queries against desktop, client-server, and mainframe databases. Access includes a graphical query by example (QBE) tool, the query design mode window, to write Access SQL statements for you. You can develop quite sophisticated applications using Access without even looking at an SQL statement in Access's SQL window. Visual C++ 4.0 doesn't include a graphical QBE tool, so until some enterprising third-party developer creates a Query OLE Control, you'll need to learn Access SQL in order to create Visual C++ 4.0 applications that interact in any substantial way with databases.
NOTE Microsoft Access without a version number refers to Access 2.0 and Access for Windows 95, version 7.0. There are no significant differences between Access SQL in these two versions of Access. Access 2.0 added a substantial number of reserved words to the SQL vocabulary of Access 1.x. SQL statements for adding tables to Access databases, plus adding fields and indexes to Access tables, are discussed in Chapter 8, "Running Crosstab and Action Queries." Microsoft calls the Visual C++ 4.0 dialect of SQL Microsoft Jet Database Engine SQL. This book uses the term Access SQL because the dialect originated in Access 1.0.
The first part of this chapter introduces you to the standardized version of SQL specified by the American National Standards Institute (ANSI), a standard known as X3.135-1992 and called SQL-92 in this book. (When you see the terms SQL-89 and SQL-92, the reference is to ANSI SQL, not the Access variety.) ANSI SQL-92 has been accepted by the International Organization for Standardization (ISO), headquartered in Geneva, and the International Electrotechnical Commission (IEC) as ISO/IEC 9075:1992, "Database Language SQL." A separate ANSI standard, X3.168-1989, defines "Database Language Embedded SQL." Thus, SQL-92 is a thoroughly standardized language, much more so than xBase, for which no independent standards yet exist. Today's client-server RDBMSs support SQL-89 and many of SQL-92's new SQL reserved words; many RDBMSs also add their own reserved words to create proprietary SQL dialects. A knowledge of ANSI SQL is required to use SQL pass-through techniques with the Jet 3.0 database engine and to employ Remote Data Objects (RDO) and the Remote Data Control (RDC). SQL pass-through is described in Chapter 20, "Creating Front Ends for Client-Server Databases."
The second part of this chapter, beginning with the section "Comparing the Access SQL Dialect and ODBC," discusses the differences between SQL-92 and Access SQL. If you're fluent in the ANSI versions of SQL, either SQL-89 or SQL-92, you'll probably want to skip to the latter part of this chapter, which deals with the flavor of SQL used by the Jet 3.0 database engine. Chapter 7, "Using the Open Database Connectivity API," describes how the Jet 3.0 database engine translates Access SQL into the
format used by ODBC drivers. Although this chapter describes the general SQL syntax for queries that modify data (called action queries by Access and in this book) and the crosstab queries of Access SQL, examples of the use of these types of queries are described in Chapter 8.
This book has made extensive use of the term query without defining what it means. Because Visual C++ 4.0 uses SQL to process all queries, this book defines query as an expression in any dialect of SQL that defines an operation to be performed by a database management system. A query usually contains at least the following three elements:
- A verb, such as SELECT, that determines the type of operation
- A predicate object that specifies one or more field names of one or more table object(s), such as * to specify all of the fields of a table
- A prepositional clause that determines the object(s) in the database on which the verb acts, such as FROM TableName
The simplest SQL query that you can construct is SELECT * FROM TableName, which returns the entire contents of TableName as the query result set. Queries are classified in this book as select queries, which return data (query result sets), or action queries, which modify the data contained in a database without returning any data.
IBM's original version of SQL, implemented as SEQUEL, had relatively few reserved words and simple syntax. Over the years, new reserved words have been added to the language by publishers of database management software. Many of the reserved words in proprietary versions of SQL have found their way into the ANSI SQL standards. Vendors of SQL RDBMSs that claim adherence to the ANSI standards have the option of adding their own reserved words to the language, as long as the added reserved words don't conflict with the usage of the ANSI-specified reserved words. Transact-SQL, the language used by the Microsoft and Sybase versions of SQL Server (both of these products were originally developed from the same product), has many more reserved words than conventional ANSI SQL. Transact-SQL even includes reserved words that allow conditional execution and loops within SQL statements. (The CASE, NULLIF, and COALESCE reserved words of SQL-92 are rather primitive for conditional execution purposes.) Access SQL includes the TRANSFORM and PIVOT statements needed to create crosstab queries; these statements are missing from ANSI SQL but are a very useful construct. TRANSFORM and PIVOT operations can be accomplished using ANSI SQL, but the construction of such an ANSI SQL statement would be quite difficult. A further discussion of the details of the syntax of SQL statements appears after the following sections, which describe the basic characteristics of the SQL language and tell you how to combine SQL and conventional 3GL source code statements.
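To make the select query/action query distinction concrete, here is one minimal example of each, using the Northwind-style Products table described earlier; the CategoryID criterion and the 10 percent price increase are arbitrary values chosen for illustration:

SELECT ProductName, UnitPrice FROM Products WHERE CategoryID = 1
UPDATE Products SET UnitPrice = UnitPrice * 1.1 WHERE CategoryID = 1

The SELECT statement returns a query result set; the UPDATE statement modifies rows in place and returns no records.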
All the dialects of SQL are fourth-generation languages (4GLs). The term fourth-generation derives from the following descriptions of the generations in the development of languages to control the operation of computers:
- First-generation languages required that you program in the binary language of the computer's hardware, called object or machine code. (The computer is the object in this case.) As an example, in the early days of mini- and microcomputers, you started (booted) the computer by setting a series of switches that sent instructions directly to the computer's CPU. Once you booted the computer, you could load binary-coded instructions with a paper tape reader. 1GLs represent programming the hard way. The first computer operating systems (OS) were written directly in machine code and loaded from paper tape or punched cards.
- Second-generation languages greatly improved the programming process by using assembly language to eliminate the necessity of setting individual bits of CPU instructions. Assembly language lets you use simple alphabetic codes, called mnemonic codes because they're easier to remember than binary instructions, and octal or hexadecimal values to substitute for one or more CPU instructions in the more arcane object code. Once you've written an assembly language program, you compile the assembly code into object code instructions that the CPU can execute. Microsoft's MASM is a popular assembly language compiler for Intel 80x86 CPUs. Assembly language remains widely used today when speed or direct access to the computer hardware is needed.
- Third-generation languages, typified by the early versions of FORTRAN (Formula Translator) and BASIC (Beginners' All-purpose Symbolic Instruction Code), let programmers substitute simple statements, usually in a structured version of English, for assembly code. 3GLs are called procedural languages because the statements you write in a 3GL are procedures that the computer executes in the sequence you specify in your program's source code. Theoretically, procedural languages should be independent of the type of CPU for which you compile your source code. Few 3GLs actually achieve the goal of being fully platform-independent; most, such as Microsoft Visual Basic and Visual C++, are designed for 80x86 CPUs. (You can, however, run Visual C++ applications on Digital Equipment's Alpha workstations and on workstations that use the MIPS RISC (Reduced Instruction Set Computer) CPU using Windows NT as the operating system. In this case, the operating system, Windows NT, handles the translation of object code to the differing CPUs.)
- Fourth-generation languages are often called nonprocedural languages. The source code you write in 4GLs tells the computer the ultimate result you want, not how to achieve it. SQL is generally considered to be a 4GL because, for example, your SQL query statements specify the data you want the database manager to send you, rather than instructions that tell the DBM how to accomplish this feat. Whether SQL is a true 4GL is subject to controversy, because the SQL statements you write are actually executed by a 3GL or, in some cases, a 2GL that deals directly with the data stored in the database file(s) and is responsible for sending your application the data in a format that the application can understand.
Regardless of the controversy over whether generic SQL is a 4GL, you need to be aware of some other differences between SQL and conventional 3GLs. The most important of these differences are as follows:
- SQL is a set-oriented language, whereas most 3GLs can be called array-oriented languages. SQL returns sets of data in a logical tabular format. The query result sets are dependent on the data in the database, and you probably won't be able to predict the number of rows (data set members) that a query will return. The number of members of the data set may vary each time you execute a query and also may vary almost instantaneously in a multiuser environment. 3GLs can handle only a fixed number of tabular data elements at a time, specified by the dimensions that you assign to a two-dimensional array variable. Thus, the application needs to know how many columns and rows are contained in the result set of an SQL query so that the application can handle the data with row-by-row, column-by-column methods. Visual C++ 4.0's CRecordset object handles this transformation for you automatically.
- SQL is a weakly typed language, whereas most 3GLs are strongly typed. You don't need to specify field data types in SQL statements; SQL queries return whatever data types have been assigned to the fields that constitute the columns of the query result set. Most compiled 3GLs are strongly typed. COBOL, C, C++, Pascal, Modula-2, and Ada are examples of strongly typed compiled programming languages. Strongly typed languages require that you declare the names and data types of all your variables before you assign values to the variables. If the data type of a query column doesn't correspond to the data type you defined for the receiving variable, an error (sometimes called an impedance mismatch error) occurs. Visual C++ is a compiled language and is strongly typed.
Consider yourself fortunate that you're using Visual C++ 4.0 to process SQL statements. You don't need to worry about how many rows a query will return or what data types occur in the query result set's columns. The CRecordset object receiving the data handles all of these details for you. With Visual C++ 4's incremental compile and incremental link, you don't need to recompile and link your entire Visual C++ application each time you change a query statement; just change the statement and rebuild your application. Visual C++ compiles and links only the functions that have been changed. The process is really quite fast.
The current ANSI SQL standards recognize four different methods of executing SQL statements. The method you use depends on your application programming environment, as described in the following list:
- Interactive SQL lets you enter SQL statements at a command-line prompt, similar to dBASE's dot prompt. As mentioned in Chapter 1, "Positioning Visual C++ in the Desktop Database Market," the use of the interactive dBASE command LIST is quite similar to the SELECT statement in interactive SQL. Mainframe and client-server RDBMSs also provide interactive SQL capability; Microsoft SQL Server provides the isql application for this purpose. Using interactive SQL is also called direct invocation. Interactive SQL is called a bulk process; if you enter a query at the SQL prompt, the result of your query appears on-screen. DBMs offer a variety of methods of providing a scrollable display of interactive query result sets.
- Embedded SQL lets you execute SQL statements by preceding the SQL statement with a keyword, such as EXEC SQL in C. Typically, you declare variables that you intend to use to receive data from an SQL query between EXEC SQL BEGIN DECLARE SECTION and EXEC SQL END DECLARE SECTION statements. You need a precompiler that is specific to the language and to the RDBMS to be used. The advantage of embedded SQL is that you assign attribute classes to a single variable in a one-step process. The disadvantage is that you have to deal with query result sets on a row-by-row basis rather than with the bulk process of interactive SQL.
- Module SQL lets you compile SQL statements separately from your 3GL source code and then link the compiled object modules into your executable program. SQL modules are similar to Visual C++ 4.0 code modules. The modules include declarations of variables and temporary tables to contain query result sets, and you can pass argument values from your 3GL to parameters of procedures declared in SQL modules. The stored procedures that execute precompiled queries on database servers have many characteristics in common with module SQL.
- Dynamic SQL lets you create SQL statements whose contents you can't predict when you write the statement. (The preceding SQL types are classified as static SQL.) As an example of dynamic SQL, suppose you want to design a Visual C++ application that can process queries against a variety of databases. Dynamic SQL lets you send queries to the database in the form of strings. For example, you can send a query to the database and obtain detailed information from the database catalog that describes the tables and the fields of tables in the database. Once you know the structure of the database, you or the user of your application can construct a custom query that adds the correct field names to the query. Visual C++'s implementation of Access SQL resembles a combination of dynamic and static SQL, although the Access database engine handles the details of reading the catalog information for you automatically when your application creates a Recordset object from the database. Chapter 6, "Understanding the Access Database Engine and DAO," describes the methods you use to extract catalog information contained in Visual C++ collections.
Technically, static SQL and dynamic SQL are called methods of binding SQL statements to database application programs. Binding refers to how you combine or attach SQL statements to your source or object code, how you pass values to SQL statements, and how you process query result sets. A third method of binding SQL statements is the call-level interface (CLI). The Microsoft Open Database Connectivity (ODBC) API uses the CLI developed by the SQL Access Group (SAG), a consortium of RDBMS publishers and users. A CLI accepts SQL statements from your application in the form of strings and passes the statements directly to the server for execution. The server notifies the CLI when the data is available and then returns the data to your application. Details of the ODBC CLI are given in Chapter 7. If you're a COBOL coder or a C/C++ programmer who is accustomed to writing embedded SQL statements, you'll need to adjust to Visual C++'s automatic creation of virtual tables when you execute a SELECT query, rather than executing CURSOR-related FETCH statements to obtain the query result rows one-by-one.
TRANSFORM Sum(CLng([Order Details].UnitPrice*Quantity*(1 - Discount)*100)/100) AS ProductAmount
SELECT Products.ProductName, Orders.CustomerID
FROM Orders, Products, [Order Details],
   Orders INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID,
   Products INNER JOIN [Order Details] ON Products.ProductID = [Order Details].ProductID
WHERE Year(OrderDate)=1994
GROUP BY Products.ProductName, Orders.CustomerID
ORDER BY Products.ProductName
PIVOT "Qtr " & DatePart("q",OrderDate) In("Qtr 1", "Qtr 2", "Qtr 3", "Qtr 4")
NOTE
The square brackets surrounding the [Order Details] table name are specific to Access SQL and are used to group table or field names that contain spaces or other punctuation that is illegal in the naming rules for tables and fields of SQL RDBMSs. Access SQL also uses the double quotation mark (") to replace the single quotation mark (or apostrophe) ('), which acts as the string identifier character in most implementations of SQL. The preceding example of the SQL statement for a crosstab query is based on the tables in Access 95's Northwind.mdb sample database. Many field names in Access 2.0's NWIND.MDB contain spaces; spaces are removed from field names in Northwind.mdb.
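As a brief illustration of the conventions described in the preceding note, the bracketed-name form and the two string-delimiter styles look like this; the table and field names come from the Northwind-style samples, and the criterion values are arbitrary. In Access SQL you might write

SELECT [Order Details].Quantity FROM [Order Details]
SELECT CompanyName FROM Customers WHERE City = "London"

whereas most other SQL dialects use the apostrophe as the string delimiter:

SELECT CompanyName FROM Customers WHERE City = 'London'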
The sections that follow describe how you categorize SQL statements and how the formal grammar of SQL is represented. They also provide examples of writing a variety of select queries in ANSI SQL.
ANSI SQL is divided into the following six basic categories of statements, presented here in the order of most frequent use:
- Data-query language (DQL) statements, also called data retrieval statements, obtain data from tables and determine how that data is presented to your application. The SELECT reserved word is the most commonly used verb in DQL (and in all of SQL). Other commonly used DQL reserved words are WHERE, ORDER BY, GROUP BY, and HAVING; these DQL reserved words often are used in conjunction with other categories of SQL statements.
- Data-manipulation language (DML) statements include the INSERT, UPDATE, and DELETE verbs, which append, modify, and delete rows in tables, respectively. DML verbs are used to construct action queries. Some books place DQL statements in the DML category.
- Transaction-processing language (TPL) statements are used when you need to make sure that all the rows of tables affected by a DML statement are updated at once. TPL statements include BEGIN TRANSACTION, COMMIT, and ROLLBACK.
- Data-control language (DCL) statements determine access of individual users and groups of users to objects in the database through permissions that you GRANT or REVOKE. Some RDBMSs let you GRANT permissions to or REVOKE permissions from individual columns of tables.
- Data-definition language (DDL) statements let you create new tables in a database (CREATE TABLE), add indexes to tables (CREATE INDEX), establish constraints on field values (NOT NULL, CHECK, and CONSTRAINT), define relations between tables (PRIMARY KEY, FOREIGN KEY, and REFERENCES), and delete tables and indexes (DROP TABLE and DROP INDEX). DDL also includes many reserved words that relate to obtaining data from the database catalog. This book classifies DDL queries as action queries because DDL queries don't return records.
- Cursor-control language (CCL) statements, such as DECLARE CURSOR, FETCH INTO, and UPDATE WHERE CURRENT, operate on individual rows of one or more tables.
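As a quick illustration of these categories, here is one minimal statement of each kind, using the Authors table of the BIBLIO sample database discussed later in this chapter. The values are arbitrary, the exact syntax accepted varies by RDBMS, and (as noted in the following paragraph) the Jet engine doesn't implement the DCL statements at all; a CCL example is omitted because neither Visual C++ 4.0 nor Access manipulates cursors directly:

DQL:  SELECT Author FROM Authors WHERE Au_ID = 17
DML:  INSERT INTO Authors (Au_ID, Author) VALUES (9999, 'Doe, J.')
TPL:  BEGIN TRANSACTION
      UPDATE Authors SET Author = 'Doe, John' WHERE Au_ID = 9999
      COMMIT
DCL:  GRANT SELECT ON Authors TO PUBLIC
DDL:  CREATE INDEX AuthorName ON Authors (Author)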
It's not obligatory that a publisher of a DBM who claims to conform to ANSI SQL support all of the reserved words in the SQL-92 standard. In fact, it's probably safe to state that, at the time this book was written, no commercial RDBMS implemented all the SQL-92 keywords for interactive SQL. The Jet 3.0 database engine, for example, doesn't support any DCL reserved words. You use the Data Access Object's programmatic security objects with Visual C++ reserved words and keywords instead. The Jet 3.0 engine doesn't need to support CCL statements, because neither Visual C++ 4.0 nor Access manipulates cursors per se. Visual C++ 4.0's Data control creates the equivalent of a scrollable (bidirectionally movable) cursor. The Remote Data Object supports the scrollable cursors of Microsoft SQL Server 6.0. This book uses the terminology defined by Appendix C of the Programmer's Reference for the Microsoft ODBC Software Development Kit (SDK) to define the following levels of SQL grammatical compliance:
- Minimum: The statements (grammar) that barely qualify a DBM as an SQL DBM but not an RDBMS. A DBM that provides only the minimum grammar is not salable in today's market.
- Core: Comprising minimum grammar plus basic DDL and DCL commands, additional DML functions, data types other than CHAR, SQL aggregate functions such as SUM() and AVG(), and a wider variety of allowable expressions to select records. Most desktop DBMs to which SQL has been added support core SQL grammar and little more.
- Extended: Comprising minimum and core grammar, plus DML outer joins, more complex expressions in DML statements, all ANSI SQL data types (as well as long varchar and long varbinary), batch SQL statements, and procedure calls. Extended SQL grammar has two levels of conformance, 1 and 2.
The formal grammar of SQL is represented in the Backus Naur Form (BNF), which is used to specify the formal grammar of many computer programming languages. Here is the full BNF form of the verb that specifies the operation that a query is to perform on a database:
<action> ::= SELECT
    | DELETE
    | INSERT [ <left paren> <privilege column list> <right paren> ]
    | UPDATE [ <left paren> <privilege column list> <right paren> ]
    | REFERENCES
    | USAGE
...
<privilege column list> ::= <column name list>
...
<column name list> ::= <column name> [ { <comma> <column name> } ... ]

To use BNF representation, you locate the class (<action> in the preceding example) in which the reserved word is included. Members of the class are separated by the vertical bar (|) character. Optional parameters of reserved words and elements are enclosed in square brackets ([]). Element names, such as <privilege column list>, are enclosed in angle braces (<>), and elements that must be grouped, such as a comma preceding a second <column name>, are enclosed in French braces ({}). You then search the list of elements to find the allowable composition of an element. In this example, the <privilege column list> is composed of the <column name list>. Then you check to see whether <column name list> has a composition of its own (in this case, one or more <column name> elements). This process is tedious, especially when the elements aren't arranged in alphabetical order. Microsoft uses a simplified form of BNF to describe the grammar supported by the present version of the ODBC API. The Access SQL syntax rules eliminate the use of the ::= characters to indicate the allowable substitution of values for an element. Instead, they substitute a tabular format, as shown in Table 5.1. Ellipses (...) in the table indicate that you have to search for the element; the element is not contiguous with the preceding element of the table.
Table 5.1. The partial syntax of the Access SQL SELECT statement.

Element                  Syntax
select-statement         SELECT [ALL|DISTINCT|DISTINCTROW] select-list table-expression
...
select-list              *|select-sublist [{, select-sublist}...]
select-sublist           table-name.*|expression [AS column-alias]|column-name
...
table-expression         from-clause [where-clause] [group-by-clause] [having-clause] [order-by-clause]
...
from-clause              FROM table-reference-list
table-reference-list     table-reference [{, table-reference}...]
table-reference          table-name [AS correlation-name]|joined-table
...
table-name               base-table-name|querydef-name|attached-table-name|correlation-name
NOTE The DISTINCTROW qualifier and the querydef-name element are specific to Access SQL. DISTINCTROW is discussed in the section "Theta Joins and the DISTINCTROW Keyword." Chapter 6 describes the Access QueryDef object.
After you've looked up all the allowable forms of the elements in the table, you might have forgotten the key word whose syntax you set out to determine. The modified Backus Naur form used by Microsoft is unquestionably easier to use than full BNF.
Here is a more practical representation of the syntax of a typical ANSI SQL statement, substituting underscores for hyphens:
SELECT [ALL|DISTINCT] select_list
FROM table_names
[WHERE {search_criteria|join_criteria} [{AND|OR} search_criteria]]
[ORDER BY field_list [ASC|DESC]]

The following list explains the use of each SQL reserved word in the preceding statement:
- SELECT specifies that the query is to return data from the database rather than modify the data in the database. The select_list element contains the names of the fields of the table that are to appear in the query. Multiple fields appear in a comma-separated list. The asterisk (*) specifies that data from all fields of a table is returned. If more than one table is involved (joined) in the query, you use the table_name.field_name syntax, in which the period (.) separates the name of the table from the name of the field.
- The ALL qualifier specifies that you want the query to return all rows, regardless of duplicate values; DISTINCT returns only nonduplicate rows. These qualifiers have significance only in queries that involve joins. The penalty for using DISTINCT is that the query will take longer to process.
- FROM begins a clause that specifies the names of the tables that contain the fields you include in your select_list. If more than one table is involved in select_list, table_names consists of comma-separated table names.
- WHERE begins a clause that serves two purposes in ANSI SQL: specifying the fields on which tables are joined, and limiting the records returned to records with field values that meet a particular criterion or set of criteria. The WHERE clause must include an operator and two operands, the first of which must be a field name. (The field name doesn't need to appear in the select_list, but the table_name that includes field_name must be included in the table_names list.) SQL operators include LIKE, IS {NULL|NOT NULL}, and IN, as well as the arithmetic operators <, <=, =, >=, >, and <>. If you use the arithmetic equal operator (=) and specify table_name.field_name values for both operands, you create an equi-join (also called an inner join) between the two tables on the specified fields. You can create left and right joins by using the special operators *= and =*, respectively, if your DBM supports outer joins. (Both left and right joins are called outer joins.) Types of joins are discussed in the section "Joining Tables."
NOTE If you use more than one table in your query, make sure that you create a join between the tables with a WHERE Table1.field_name = Table2.field_name clause. If you omit the statement that creates the join, your query will return the Cartesian product of the two tables. A Cartesian product is all the combinations of fields and rows in the two tables. This results in an extremely large query result set and, if the tables have a large number of records, it can cause your computer to run out of memory to hold the query result set. (The term Cartesian is derived from the name of the famous French mathematician René Descartes.)
- ORDER BY defines a clause that determines the sort order of the records returned by the SELECT statement. You specify the field(s) on which you want to sort the query result set in the field_list. You can specify a descending sort with the DESC qualifier; ascending (ASC) is the default. As in the other lists, if you have more than one field_name, you use a comma-separated list. You use the table_name.field_name specifier if you have joined tables.
Depending on the dialect of SQL your database uses and the method of transmitting the SQL statement to the DBM, you might need to terminate the SQL statement with a semicolon. (Access SQL no longer requires the semicolon; statements you send directly to the server through the ODBC driver don't use terminating semicolons.)
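The following statement combines the clauses just described into a single query against the Northwind-style Customers and Orders tables; the field names and the country criterion are assumptions chosen for illustration:

SELECT Customers.CompanyName, Orders.OrderID, Orders.OrderDate
FROM Customers, Orders
WHERE Customers.CustomerID = Orders.CustomerID
AND Customers.Country = 'Germany'
ORDER BY Orders.OrderDate DESC

The first WHERE expression creates the equi-join between the two tables, the second limits the rows to German customers, and the ORDER BY clause returns the most recent orders first.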
The MS Query application that accompanies Visual C++ version 1.5 (\MSVC15\MSQUERY) is an excellent tool for creating SQL statements. Visual C++ 2.x and 4 don't include MS Query; however, because Visual C++ 1.5 is included with later versions of Visual C++, you can install MS Query from the Visual C++ 1.5 distribution. Also, when you purchase Microsoft Office, you receive a 32-bit version of Microsoft Query; it can be found on the Microsoft Office Pro CD in the \OS\MSAPPS\MSQUERY folder.
NOTE BIBLIO.MDB is included on the CD that comes with this book in the CHAPTR05 folder. Visual Basic users will have an Access database called BIBLIO, which is included with Visual Basic. Visual C++ users don't have this sample database. If you have Visual Basic, you can use the copy of BIBLIO included with Visual Basic or the copy included on the CD in the CHAPTR05 folder.
NOTE Query as found on the Visual C++ 1.5x CD (a 16-bit application) works only with the Access 2 version of BIBLIO. It might not work correctly with the second version, called BIBLIO 95, which is an Access 7 version of the database. MSQRY32 (the 32-bit version of MS Query, which is on the Microsoft Office CD) will work with the Access 7 version of BIBLIO. The 32-bit version of MS Query is a bit more reliable and should be used if possible.
MS Query falls into the category of ad hoc query generators. You can use MS Query to test some simple SQL statements by following these steps:
1. Start MS Query and choose File | New Query.
2. MS Query displays the Select Data Source dialog box. Select the BIBLIO datasource. If you haven't previously opened BIBLIO using MS Query, click the Other button to add BIBLIO to MS Query's list of datasources.
3. MS Query displays the Add Tables dialog box. Select the Authors table and click the Add button. Then click the Close button.
4. MS Query displays its Query 1 MDI child window. Click the SQL button on the toolbar.
5. Enter SELECT * FROM Authors in the SQL Statement window as a simple query to check whether MS Query works, as shown in Figure 5.1.
NOTE Access SQL statements require a semicolon statement terminator. The MS Query application doesn't need a semicolon at the end of the SQL statement. Adding a semicolon will disable MS Query's graphical query representation, but the query will still work as expected.
6. Click the OK button in the SQL dialog box. The query result set appears in the child window.
7. Click the SQL toolbar button. The query, reformatted to fully qualify all names, appears in the SQL dialog box. The query now reads SELECT Authors.Au_ID, Authors.Author FROM Authors Authors. Figure 5.2 shows a portion of the query result set and the reformatted SQL query.
Figure 5.2. The query result window and reformatted query in the MS Query application.
NOTE A typical result of this type of query, which returns 46 rows in .0547 seconds, is approximately 840 rows/second. A 486DX2/66 with local bus video and 16M of RAM was used for these tests. These rates represent quite acceptable performance for a Windows database front end.
8. Reopen the SQL dialog box and clear the current SQL query edit box of the SQL Statement window. Enter SELECT * FROM Publishers WHERE State = 'NY' in the SQL Statement window and then click the OK button. The results of this query appear in Figure 5.3.
Figure 5.3. A query that returns records for publishers located in New York.
NOTE In this case, the query-data return rate and the display rate are about 24 rows per second. The query-data return rate was reduced because there are more columns in the Publishers table (eight) than in the
Authors table (two). However, if the query-data return rate is inversely proportional to the number of columns, the rate should be 840 * 2 / 8, or 210 rows per second. The extrapolated grid display rate, 170 * 2 / 8, is 42.5 rows per second, which is closer to the 24 rows per second rate of the prior example and can be accounted for by the greater average length of the data in the fields. Part of the difference between 24 and 210 rows per second for the query-data return rate is because the Access database engine must load the data from the table on the fixed disk into a temporary buffer. If you run the query again, you'll find that the rate increases to 8 / 0.0625, or 128 rows per second. The remainder of the difference in the query-data return rate is because the Access database engine must test each value of the State field for 'NY'.
9. Open the SQL dialog box again and add ORDER BY Zip to the end of your SQL statement. Figure 5.4 shows the query and its result.
Figure 5.4. The records for publishers in New York sorted by zip code.
NOTE The data return rate will now have dropped to about 68 rows per second. The decrease in speed can be attributed to the sort operation that you added to the query. The data-return rates and data-display rates you achieve will depend on the speed of the computer you use. As a rule, each clause you add to a query will decrease the data-return rate because of the additional data-manipulation operations that are required.
10. Replace the * in the SELECT statement, which returns all fields, with PubID, 'Company Name', City so that only three of the fields appear in the SnapShot window. The result, shown in Figure 5.5, demonstrates that you don't have to include the fields that you use for the WHERE and ORDER BY clauses in the field_names list of your SELECT statement.
Figure 5.5. The query return set displaying only three fields of the Publishers table.
NOTE The single quotes (') surrounding Company Name are necessary when a field name or table name contains a space. Only Access databases permit spaces and punctuation other than the underscore (_) in field names. Using spaces in field and table names, or in the names of any other database objects, is not considered good database-programming practice. Spaces in database field names and table names appear in this book only when such names are included in sample databases created by others.
NOTE The MS Query toolbar provides a number of buttons that let you search for records in the table, filter the records so that only selected records appear, and sort the records on selected fields. A filter is the equivalent of adding a WHERE field_name where_expression clause to your SQL statement. The sort buttons add an ORDER BY field_names clause.
Microsoft designed the MS Query application to demonstrate the features of SQL and ODBC that pertain to manipulating and displaying data contained in the tables of databases. MS Query is a rich source of SQL examples. It also contains useful examples of user interface design techniques for database applications and MDI child forms.
As I mentioned earlier in this chapter, SQL provides the basic arithmetic operators (<, <=, =, >=, >, and <>). SQL also has a set of operators that are used in conjunction with values of fields of the text data type (LIKE and IN) and that deal with NULL values in fields (IS NULL and IS NOT NULL). The Access database engine also supports the use of many string and numeric functions in SQL statements to calculate column values of query return sets. (Few of these functions are included in ANSI SQL.)
NOTE
Access supports the use of user-defined functions (UDFs) in SQL statements to calculate column values in queries. Visual C++ and ODBC support only native functions that are reserved words, such as Val(). Functions other than SQL aggregate functions are called implementation-specific in ANSI SQL. Implementation-specific means that the supplier of the DBM is free to add functions to the supplier's implementation of ANSI SQL.
The majority of the operators you use in SQL statements are dyadic. Dyadic operators require two operands. (All arithmetic operators and BETWEEN are dyadic.) Operators such as LIKE, IN, IS NULL, and IS NOT NULL are monadic. Monadic operators require only one operand. All expressions that you create with comparison operators return True or False, not a value. The sections that follow describe the use of the common dyadic and monadic operators of ANSI SQL.
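As a quick illustration (a minimal sketch against the BIBLIO.MDB tables used throughout this chapter), the first of the following statements uses dyadic comparison operators, each of which takes two operands, and the second uses the monadic IS NULL predicate:

SELECT Name, State FROM Publishers WHERE State = 'NY' AND PubID >= 5

SELECT Name, State FROM Publishers WHERE Zip IS NULL

The first WHERE clause evaluates two dyadic expressions and combines them with AND; the second WHERE clause consists of a single monadic expression on the Zip field.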
The use of arithmetic operators with SQL doesn't differ greatly from their use in Visual C++ or other computer languages. The following is a list of the points you need to remember about arithmetic operators and functions used in SQL statements (especially in WHERE clauses):
The = and <> comparison operators are used for both text and numeric field data types. The angle-bracket "not-equal" symbol (<>) is equivalent to the != combination used to represent "not equal" in ANSI SQL. (The equal sign isn't used as an assignment operator in SQL.)

The arithmetic comparison operators <, <=, >=, and > are intended primarily for use with operands that have numeric field data types. If you use the preceding comparison operators with values of the text field data type, the numeric ANSI values of each character of the two fields are compared in left-to-right sequence.

The remaining arithmetic operators, +, -, *, /, and ^ or ** (the implementation-specific exponentiation operator), aren't comparison operators. These operators apply only to calculated columns of query result sets, the subject of the next section.

To compare the values of text fields that represent numbers, such as the Zip field of the Publishers table of BIBLIO.MDB, you can use the Val() function in a WHERE clause to process the text values as their numeric equivalents when you use the Access database engine. An example of this usage is SELECT * FROM Publishers WHERE Val(Zip) > 12000.
NOTE If you attempt to execute the preceding SQL statement in MS Query (but not MSQRY32), you might receive an error message (usually with no text), or sometimes MS Query will simply GPF. The error is caused by Null values in the Zip Code data cells of several publishers in the table. Most expressions don't accept Null argument values. Thus, you need to add an IS NOT NULL criterion to your WHERE clause. If you use the form WHERE (Zip > '12000') AND (Zip IS NOT NULL), you get the same error message, because the order in which the criteria are processed is the sequence in which the criteria appear in your SQL statement. Using WHERE Zip IS NOT NULL AND Zip > '12000' solves the problem. The syntax of the NULL predicates is explained in the section "Monadic Text Operators, Null Value Predicates, and Functions."
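Putting the note's fix together with the Val() approach described earlier, the complete statements look like the following minimal sketch (Val() is specific to the Access database engine):

SELECT * FROM Publishers WHERE Zip IS NOT NULL AND Zip > '12000'

SELECT * FROM Publishers WHERE Zip IS NOT NULL AND Val(Zip) > 12000

The first form compares the Zip values as text; the second converts them to numbers before the comparison.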
The BETWEEN predicate in ANSI SQL and the Between operator in Access SQL are used with numeric or date-time field data types. The syntax is field_name BETWEEN Value1 AND Value2. This syntax is equivalent to the expression field_name >= Value1 AND field_name <= Value2. Access SQL requires you to surround date-time values with number signs (#), as in DateField Between #1-1-93# And #12-31-93#. You can negate the BETWEEN predicate by preceding BETWEEN with NOT.
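Here are two complete statements that illustrate the predicate. The first runs against the Publishers table of BIBLIO.MDB; the second is a sketch that assumes a table with a date-time field, such as the Orders table of NorthWind.MDB (used later in this chapter), whose Order Date field name is an assumption:

SELECT Name FROM Publishers WHERE PubID BETWEEN 1 AND 5

SELECT * FROM Orders WHERE [Order Date] Between #1-1-93# And #12-31-93#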
NOTE Where Access SQL uses syntax that isn't specified by ANSI SQL, such as the use of number signs (#) to indicate date-time field data types, or where examples of complete statements are given in Access SQL, the Access SQL keywords that are also ANSI SQL reserved words appear in the upper- and lowercase convention.
Using Access, you can create calculated columns in query return sets by defining fields that use SQL arithmetic operators and functions that are supported by the Access database engine or your client-server RDBMS. Ordinarily, calculated columns are derived from fields of numeric field data types. BIBLIO.MDB uses a numeric data type (the auto-incrementing long integer Counter field) for ID fields, so you can use the PubID field or a Val(Zip) expression as the basis for the calculated field. Enter the following statement in Access's SQL query window:

SELECT DISTINCTROW Publishers.Name, Val([Zip])*3 AS Zip_Times_3, Publishers.State FROM Publishers

The query result set appears as shown in Figure 5.6.

Figure 5.6. A calculated column added to the query against the Publishers table.

The AS qualifier designates an alias for the column name, column_alias. If you don't supply the AS column_alias qualifier, the column name is empty when you use the Access database engine. Access provides a default AS Expr1 column alias for calculated columns; the column_alias that appears when you use ODBC to connect to databases is implementation-specific. IBM's DB2 and DB2/2, for example, don't support aliasing of column names with the AS qualifier. ODBC drivers for DB2 and DB2/2 may assign the field name from which the calculated column value is derived, or apply an arbitrary name, such as Col_1.
NOTE If you must include spaces in the column_alias, make sure that you enclose the column_alias in square brackets for the Access database engine and in single quotation marks for RDBMSs that support spaces in column_alias fields. (Although you might see column names such as Col 1 when you execute queries against DB2 or other mainframe databases in an emulated 3270 terminal session, these column_alias values are generated by the local query tool running on your PC, not by DB2.) If you use single or double quotation marks with the Access database engine, these quotation marks appear in the column headers.
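For example, a minimal variation of the preceding query that uses a bracketed alias containing a space (for the Access database engine) looks like this:

SELECT DISTINCTROW Publishers.Name, Val([Zip])*3 AS [Zip Times 3], Publishers.State FROM Publishers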
One of the most useful operators for the WHERE criterion of fields of the text field data type is ANSI SQL's LIKE predicate, called the Like operator in Access SQL. (The terms predicate and operator are used interchangeably in this context.) The LIKE predicate lets you search for one or more characters you specify at any location in the text. Table 5.2 shows the syntax of the ANSI SQL LIKE predicate and the Access SQL Like operator used in the WHERE clause of an SQL statement.
Table 5.2. Forms of the ANSI SQL LIKE and Access SQL Like predicates.

ANSI SQL       Access SQL     Description                                                                What It Returns
LIKE '%am%'    Like "*am*"    Matches any text that contains the characters.                             ram, rams, damsel, amnesty
LIKE 'John%'   Like "John*"   Matches any text beginning with the characters.                            Johnson, Johnsson
LIKE '%son'    Like "*son"    Matches any text ending with the characters.                               Johnson, Anderson
LIKE 'Glen_'   Like "Glen?"   Matches the text and any single trailing character.                        Glenn, Glens
LIKE '_am'     Like "?am"     Matches the text and any single preceding character.                       dam, Pam, ram
LIKE '_am%'    Like "?am*"    Matches the text with one preceding character and any trailing characters. dams, Pam, Ramses
The IS NULL and IS NOT NULL predicates (Is Null and Is Not Null operators in Access SQL) test whether a value has been entered in a field. IS NULL returns False and IS NOT NULL returns True if a value, including an empty string ("") or 0, is present in the field. The SQL-92 POSITION() function returns the position of characters in a text field using the syntax POSITION(characters IN field_name). The equivalent Access SQL function is InStr(field_name, characters). If the characters are not found in field_name, both functions return 0. The SQL-92 SUBSTRING() function returns a set of characters with SUBSTRING(field_name FROM start_position FOR number_of_characters). This function is quite useful for selecting and parsing text fields.
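The following pairs of statements sketch the two dialects. The SQL-92 forms aren't accepted by the Access database engine and would have to be passed through to a server that implements them, and the assumption that author names contain a comma is only for illustration:

SELECT Author FROM Authors WHERE POSITION(',' IN Author) > 0
SELECT Author FROM Authors WHERE InStr(Author, ',') > 0

SELECT SUBSTRING(Author FROM 1 FOR 10) FROM Authors
SELECT Mid(Author, 1, 10) FROM Authors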
Joining Tables
As I mentioned earlier in this chapter, you can join two tables by using table_name.field_name operands with a comparison operator in the WHERE clause of an SQL statement. You can join additional tables by combining two sets of join statements with the AND operator. SQL-86 and SQL-89 supported only WHERE joins. You can create equi-joins, natural equi-joins, left and right equi-joins, not-equal joins, and self-joins with the WHERE clause. Joins that are created with the equal (=) operator use the prefix equi-. SQL-92 added the JOIN reserved words, plus the CROSS, NATURAL, INNER, OUTER, FULL, LEFT, and RIGHT qualifiers, to describe a variety of JOINs. At the time this book was written, few RDBMSs supported the JOIN statement. (Microsoft SQL Server 4.2, for example, doesn't include the JOIN statement in Transact-SQL.) Access SQL supports INNER, LEFT, and RIGHT JOINs with SQL-92 syntax using the ON predicate. Access SQL doesn't support the USING clause or the CROSS, NATURAL, or FULL qualifiers for JOINs.

A CROSS JOIN returns the Cartesian product of two tables. The term CROSS is derived from cross-product, a synonym for Cartesian product. You can emulate a CROSS JOIN by leaving out the join components of the WHERE clause of a SELECT statement that includes field names from more than one table. Figure 5.7 shows Access 95 displaying the first few rows of the 294-row Cartesian product created when you enter SELECT Publishers.Name, Authors.Author FROM Publishers, Authors in the SQL Statement window. There are seven Publishers records and 42 Authors records; thus, the query returns 294 rows (7 * 42 = 294). It is highly unlikely that you would want to create a CROSS JOIN in a commercial database application.

Figure 5.7. The first few rows of the 294-row Cartesian product from the Publishers and Authors tables.

The common types of joins that you can create with SQL-89 and Access SQL are described in the following sections.
NOTE All joins except the CROSS JOIN or Cartesian product require that the field data types of the two fields be identical or that you use a function (where supported by the RDBMS) to convert dissimilar field data types to a common type.
The most common type of join is the equi-join or INNER JOIN. You create an equi-join with a WHERE clause using the following generalized statement:
SELECT Table1.field_name, ... Table2.field_name ...
FROM Table1, Table2
WHERE Table1.field_name = Table2.field_name

The SQL-92 JOIN syntax to achieve the same result is as follows:

SELECT Table1.field_name, ... Table2.field_name ...
FROM Table1 INNER JOIN Table2
ON Table1.field_name = Table2.field_name

A single-column equi-join between the PubID field of the Publishers table and the PubID field of the Titles table of BIBLIO.MDB appears as follows:

SELECT Publishers.Name, Titles.ISBN, Titles.Title
FROM Publishers INNER JOIN Titles
ON Publishers.PubID = Titles.PubID;

When you execute this query with Access 95, the Publishers and Titles tables are joined on the PubID columns of both tables. Figure 5.8 shows the result of this join.

Figure 5.8. Access displaying an equi-join on the Publishers and Titles tables.
NOTE The INNER qualifier is optional in SQL-92 but is required in Access SQL. If you omit the INNER qualifier when you use the Access database engine, you receive the message Syntax error in FROM clause when you attempt to execute the query.
NOTE
Natural equi-joins create joins automatically between identically named fields of two tables. Natural equi-joins eliminate the necessity of including the ON predicate in the JOIN statement. Access SQL doesn't support the NATURAL JOIN statement.
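For comparison only, here is a sketch of the SQL-92 NATURAL JOIN syntax. The Access database engine rejects it, but a server that implements SQL-92 would join Publishers and Titles on their identically named PubID fields:

SELECT Name, Title FROM Publishers NATURAL JOIN Titles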
The Access SQL statements that you create in the graphical QBE design mode of Access generate an expanded JOIN syntax. Access separates the JOIN statement from a complete FROM clause with a comma and repeats the table names in a separate, fully defined join statement. Using the Access SQL syntax shown in the following example gives the same result as the preceding ANSI SQL-92 example:
SELECT DISTINCTROW Publishers.Name, Titles.ISBN, Titles.Title
FROM Publishers, Titles,
Publishers INNER JOIN Titles
ON Publishers.PubID = Titles.PubID

The purpose of the DISTINCTROW statement in Access SQL is discussed in the section "Comparing the Access SQL Dialect and ODBC" later in this chapter. Here is the equivalent of the two preceding syntax examples, using the WHERE clause to create the join:

SELECT Publishers.Name, Titles.ISBN, Titles.Title
FROM Publishers, Titles
WHERE Publishers.PubID = Titles.PubID

There is no difference between using the INNER JOIN and the WHERE clause to create an equi-join.
NOTE Equi-joins return only rows in which the values of the joined fields match. Field values of records in either table that don't have matching values in the other table don't appear in the query result set returned by an equi-join. If there is no match between any of the records, no rows are returned. A query result set without rows is called a null set.
Multiple Equi-Joins
You can create multiple equi-joins to link several tables by pairs of fields with common data values. For example, you can link the Publishers, Titles, and Authors tables of BIBLIO.MDB with the following SQL-92 statement:
SELECT Publishers.Name, Titles.Title, Titles.Au_ID, Authors.Author
FROM Publishers INNER JOIN Titles
ON Publishers.PubID = Titles.PubID,
INNER JOIN Authors
ON Titles.Au_ID = Authors.Au_ID

You need to include the Titles.Au_ID field in the query because the second join is based on the result set returned by the first join. Access SQL, however, requires that you explicitly define each INNER JOIN with the following syntax:

SELECT DISTINCTROW Publishers.Name, Titles.Title, Titles.Au_ID, Authors.Author
FROM Publishers, Titles, Authors,
Publishers INNER JOIN Titles
ON Publishers.PubID = Titles.PubID,
Titles INNER JOIN Authors
ON Titles.Au_ID = Authors.Au_ID

The query result set from the preceding Access SQL query appears in Figure 5.9.
Figure 5.9. The query result set with three tables joined.

Here is the equivalent of the preceding example using the WHERE clause:

SELECT Publishers.Name, Titles.Title, Titles.Au_ID, Authors.Author
FROM Publishers, Titles, Authors
WHERE Publishers.PubID = Titles.PubID
AND Titles.Au_ID = Authors.Au_ID
NOTE As a rule, using the WHERE clause to specify equi-joins results in simpler query statements than specifying INNER JOINs. When you need to create OUTER JOINs, the subject of the next section, you might want to use INNER JOIN statements to maintain consistency in Access SQL statements.
OUTER JOINs
INNER JOINs (equi-joins) return only rows with matching field values. OUTER JOINs return all the rows of one table and only those rows in the other table that have matching values. There are two types of OUTER JOINs:
LEFT OUTER JOINs return all rows of the table or result set to the left of the LEFT OUTER JOIN statement and only the rows of the table to the right of the statement that have matching field values. In WHERE clauses, LEFT OUTER JOINs are specified with the *= operator.

RIGHT OUTER JOINs return all rows of the table or result set to the right of the RIGHT OUTER JOIN statement and only the rows of the table to the left of the statement that have matching field values. WHERE clauses specify RIGHT OUTER JOINs with the =* operator.
It is a convention that joins are created in one-to-many form; that is, the primary table that represents the "one" side of the relation appears to the left of the JOIN expression or the operator of the WHERE clause, and the related table of the "many" side appears to the right of the expression or operator. You use LEFT OUTER JOINs to display all of the records of the primary table, regardless of matching records in the related table. RIGHT OUTER JOINs are useful for finding orphan records. Orphan records are records in related tables that have no related records in the primary tables. They are created when you violate referential integrity rules.

The SQL-92 syntax for a statement that returns all Publishers records, regardless of matching values in the Titles table, and all Titles records, whether or not authors for individual titles are identified, is as follows:

SELECT Publishers.Name, Titles.Title, Titles.Au_ID, Authors.Author
FROM Publishers LEFT OUTER JOIN Titles
ON Publishers.PubID = Titles.PubID,
LEFT OUTER JOIN Authors
ON Titles.Au_ID = Authors.Au_ID

The equivalent joins using the WHERE clause are created by the following query:

SELECT Publishers.Name, Titles.Title, Titles.Au_ID, Authors.Author
FROM Publishers, Titles, Authors
WHERE Publishers.PubID *= Titles.PubID
AND Titles.Au_ID *= Authors.Au_ID

Access SQL requires you to use the special syntax described in the preceding section, and it doesn't permit you to add the OUTER reserved word in the JOIN statement. Here is the Access SQL equivalent of the previous query example:
SELECT DISTINCTROW Publishers.Name, Titles.Title, Titles.Au_ID, Authors.Author
FROM Publishers, Titles, Authors,
Publishers LEFT JOIN Titles
ON Publishers.PubID = Titles.PubID,
Titles LEFT JOIN Authors
ON Titles.Au_ID = Authors.Au_ID

Figure 5.10 shows the result of running the preceding query against the BIBLIO.MDB database.

Figure 5.10. The result of substituting LEFT JOIN for INNER JOIN in a query against the BIBLIO.MDB database.
NOTE Access SQL doesn't support the *= and =* operators in WHERE clauses. You need to use the LEFT JOIN or RIGHT JOIN reserved words to create outer joins when you use the Access database engine. This restriction doesn't apply to SQL pass-through queries that you execute on servers that support *= and =* operators, such as Microsoft and Sybase SQL Server.
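Tying this back to the earlier discussion of orphan records, the following Access SQL sketch uses a RIGHT JOIN and the IS NULL predicate to list Titles records whose PubID has no matching Publishers record. With intact sample data in BIBLIO.MDB, the query may well return an empty (null) set:

SELECT Titles.ISBN, Titles.Title, Titles.PubID
FROM Publishers RIGHT JOIN Titles
ON Publishers.PubID = Titles.PubID
WHERE Publishers.PubID IS NULL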
You can create joins using comparison operators other than =, *=, and =*. Joins that are not equi-joins are called theta joins. The most common form of theta join is the not-equal (theta) join, which uses the WHERE table_name.field_name <> table_name.field_name syntax. The BIBLIO.MDB database doesn't contain tables with fields that lend themselves to demonstrating not-equal joins. However, if you have a copy of Access's NorthWind.MDB sample database, you can execute an Access SQL query to find records in the Orders table that have a Ship Address value that differs from the Address value in the Customers table by employing the following query:
SELECT DISTINCTROW Customers.[Company Name], Customers.Address, Orders.[Ship Address]
FROM Customers, Orders,
Customers INNER JOIN Orders
ON Customers.[Customer ID] = Orders.[Customer ID]
WHERE ((Orders.[Ship Address]<>[Customers].[Address]))

The preceding query results in the query return set shown in Figure 5.11.

Figure 5.11. A not-equal theta join to display customers whose shipping and billing addresses differ.

If you execute the same query without Access SQL's DISTINCTROW qualifier, you get the same result. However, if you substitute the ANSI SQL DISTINCT qualifier for Access SQL's DISTINCTROW, the result is distinctly different, as shown in Figure 5.12.

Figure 5.12. The effect of applying the DISTINCT qualifier to the query shown in Figure 5.11.

The query result set shown in Figure 5.12 is created by the following statement, which is the same in Access SQL and ANSI SQL, disregarding the unconventional field and table names enclosed in square brackets:

SELECT DISTINCT Customers.[Company Name], Customers.Address, Orders.[Ship Address]
FROM Customers, Orders
WHERE Customers.[Customer ID] = Orders.[Customer ID]
AND Orders.[Ship Address] <> Customers.Address

The DISTINCT qualifier specifies that only rows that have differing values in the fields specified in the SELECT statement should be returned by the query. Access SQL's DISTINCTROW qualifier causes the return set to include each row in which any of the values of all of the fields in the two tables (not just the fields specified for display by the SELECT statement) differ.
A self-join is a join created between two fields of the same table having similar field data types. The first field is usually the primary key field, and the second field of the join ordinarily is a foreign key field that relates to the primary key field, although this isn't a requirement for a self-join. (This may be a requirement to make the result of the self-join meaningful, however.) When you create a self-join, the DBM creates a copy of the original table and then joins the copy to the original table. No tables in BIBLIO.MDB offer fields on which you can create a meaningful self-join. The Employees table of NorthWind.MDB, however, includes the Reports To field, which specifies the Employee ID of an employee's supervisor. Here is the Access SQL statement to create a self-join on the Employees table to display the name of an employee's supervisor:
SELECT Employees.[Employee ID] AS EmpID,
Employees.[Last Name] & ", " & Employees.[First Name] AS Employee,
Employees.[Reports To] AS SupID,
EmpCopy.[Last Name] & ", " & EmpCopy.[First Name] AS Supervisor
FROM Employees, Employees AS EmpCopy,
Employees INNER JOIN EmpCopy
ON Employees.[Reports To] = EmpCopy.[Employee ID]

You create a temporary copy of the table, named EmpCopy, with the FROM...Employees AS EmpCopy clause. Each of the query's field names is aliased with an AS qualifier. The Employee and Supervisor columns are composite columns whose values are created by combining a last name, a comma, and a space with the first name. The query result set from the preceding SQL statement appears in Figure 5.13.

Figure 5.13. The query result set of a self-join on the Employees table of NorthWind.MDB.

ANSI SQL doesn't provide a SELF INNER JOIN, but you can create the equivalent by using the ANSI
version of the preceding statement. You can substitute a WHERE Employees.[Reports To] = EmpCopy.[Employee ID] clause for the INNER JOIN...ON statement.
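Written out in full, the WHERE-clause version of the self-join looks like the following sketch. Note that the & concatenation operator and the bracketed names containing spaces are Access-specific, so this statement still requires the Access database engine even though the join itself uses the ANSI-style WHERE syntax:

SELECT Employees.[Employee ID] AS EmpID,
Employees.[Last Name] & ", " & Employees.[First Name] AS Employee,
Employees.[Reports To] AS SupID,
EmpCopy.[Last Name] & ", " & EmpCopy.[First Name] AS Supervisor
FROM Employees, Employees AS EmpCopy
WHERE Employees.[Reports To] = EmpCopy.[Employee ID]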
NOTE Self-joins are relatively uncommon, because a table that is normalized to fourth normal form wouldn't include an equivalent of the Reports To field. A separate table would relate the Employee ID values of employees and supervisors. However, creating a separate table to contain information that can be held in a single table without ambiguity is generally considered overnormalization. This is the primary reason that most developers stop normalizing tables at the third normal form.
ANSI SQL includes set functions (called SQL aggregate functions in this book), which act on sets of records. The standard SQL-92 aggregate functions are described in the following list. The field_name argument of the functions can be the name of a field (with a table_name. specifier, if required) or the all-fields specifier, the asterisk (*).
COUNT(field_name) returns the number of rows that contain NOT NULL values of field_name. COUNT(*) returns the number of rows in the table or query without regard for NULL values in fields.

MAX(field_name) returns the largest value of field_name in the set.

MIN(field_name) returns the smallest value of field_name in the set.

SUM(field_name) returns the total value of field_name in the set.

AVG(field_name) returns the arithmetic average (mean) value of field_name in the set.
The SQL aggregate functions can act on persistent tables or virtual tables, such as query result sets. Here is the basic syntax of queries that use the SQL aggregate functions:
SELECT FUNCTION(field_name|*) [AS column_alias] FROM table_name

This query returns a single record with the value of the SQL aggregate function you choose. You can test the SQL aggregate functions with BIBLIO.MDB using the following query:
SELECT COUNT(*) AS Count, SUM(PubID) AS Total, AVG(PubID) AS Average, MIN(PubID) AS Minimum, MAX(PubID) AS Maximum
FROM Publishers

Figure 5.14 shows the result of the preceding aggregate query.

Figure 5.14. The five SQL aggregate functions for the Publishers table.

Databases with significant content usually have tables that contain fields representing the classification of objects. The BIBLIO.MDB database doesn't have such a classification, but the Products table of NorthWind.MDB classifies an eclectic assortment of exotic foodstuffs into eight different categories. You use the GROUP BY clause when you want to obtain values of the SQL aggregate functions for each class of an object. The GROUP BY clause creates a virtual table called, not surprisingly, a grouped table. The following Access SQL query counts the number of items in each of the eight food categories included in the Category ID field and then calculates three total and average values for each of the categories:

SELECT [Category ID] AS Category, COUNT(*) AS Items,
Format(AVG([Unit Price]), "$#,##0.00") AS Avg_UP,
SUM([Units in Stock]) AS Sum_Stock,
SUM([Units on Order]) AS Sum_Ordered
FROM Products
GROUP BY [Category ID]
NOTE The preceding query uses the Access SQL Format() function to format the values returned for average unit price (Avg_UP) in conventional monetary format. This feature is not found in ANSI SQL.
The result of the preceding query appears in Figure 5.15.

Figure 5.15. Using GROUP BY with the SQL aggregate functions.

You might want to restrict group (category) membership using a particular criterion. You might think that you could use a WHERE clause to establish the criterion, but WHERE clauses apply to the entire table. The HAVING clause acts like a WHERE clause for groups. Therefore, if you want to limit the applicability of the SQL aggregate functions to a particular set or group, you add the HAVING clause and the IN() operator, as in the following Access SQL example, which returns only rows for the BEVR and COND categories:

SELECT [Category ID] AS Category, COUNT(*) AS Items,
Format(AVG([Unit Price]), "$#,##0.00") AS Avg_UP,
SUM([Units in Stock]) AS Sum_Stock,
SUM([Units on Order]) AS Sum_Ordered
FROM Products
GROUP BY [Category ID]
HAVING [Category ID] IN('BEVR', 'COND')

The result of the preceding query appears in Figure 5.16.

Figure 5.16. Using GROUP BY with the SQL aggregate functions and the IN() operator.
Access SQL doesn't support ANSI SQL Data Definition Language (DDL) statements. You modify the Tables, Fields, and Indexes collections with Visual C++ code to create or modify database objects. You can use the Microsoft ODBC Desktop Database Drivers kit to provide limited (ODBC Extended Level 1) DDL capability.

Access SQL doesn't support ANSI SQL Data Control Language (DCL), and Visual C++ doesn't offer an alternative method of granting and revoking user permissions for database objects. The Microsoft ODBC Desktop Database Drivers kit provides limited (ODBC Extended Level 1) DCL capability.

Access SQL doesn't support subqueries. To create the equivalent of a subquery, you need to execute a second query against a Dynaset object created by a query.
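For example, the ANSI SQL statement that follows uses an IN() subquery to list publishers that have at least one title in the Titles table; the two Access SQL statements after it sketch one way to get the same result in two steps, where [Qry_Title_Pubs] is a hypothetical name for the first query saved as a QueryDef (or for the Dynaset it returns):

SELECT Name FROM Publishers
WHERE PubID IN (SELECT PubID FROM Titles)

SELECT DISTINCT PubID FROM Titles

SELECT Publishers.Name
FROM Publishers INNER JOIN [Qry_Title_Pubs]
ON Publishers.PubID = [Qry_Title_Pubs].PubID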
The following sections summarize the differences between the keywords of Access SQL and the reserved words of ANSI SQL, as well as how Access SQL deals with the data types defined by ANSI SQL.
ANSI SQL reserved words, by tradition, are set in uppercase type. Reserved words in ANSI SQL may not be used as names of objects, such as tables or fields, or as names of parameters or variables used in SQL statements. This book refers to elements of Access SQL syntax as keywords because, with the exception of some Access SQL functions, Access SQL keywords aren't reserved words in Visual C++. Table 5.3 lists the ANSI SQL reserved words.
Table 5.3. ANSI SQL reserved words that correspond to Access SQL keywords.
HAVING    IN    JOIN    ON    ORDER    SELECT    SET    WITH
This list shows the ANSI set functions that are identical to Access SQL aggregate functions:

COUNT()    SUM()    AVG()    MIN()    MAX()

Table 5.4 lists the commonly used ANSI SQL reserved words (including functions) and symbols that don't have a directly equivalent Access SQL reserved word or symbol. This table doesn't include many of the new reserved words added to SQL-89 by SQL-92, because these reserved words hadn't yet been implemented in the versions of client-server RDBMS that had been released as commercial products at the time this book was written.
Table 5.4. Common ANSI SQL reserved words that don't have a direct equivalent in Access SQL.

Reserved Word         Category   Substitute
ALL                   DQL        Applies only to subqueries.
ALTER TABLE           DDL        Use Fields collection.
ANY                   DQL        Applies only to subqueries.
CLOSE                 CCL        Use Close method on Snapshot object.
COMMIT                TPL        Visual C++ MFC CommitTrans() member function.
CREATE INDEX          DDL        Use Indexes collection.
CREATE TABLE          DDL        Use Tables collection.
CREATE VIEW           DDL        Equivalent to a Snapshot object.
CURRENT               CCL        Scrollable cursors are built into Dynaset and Snapshot objects.
CURSOR                CCL        Scrollable cursors are built into Dynaset and Snapshot objects.
DECLARE               CCL        Scrollable cursors are built into Dynaset and Snapshot objects.
DROP INDEX            DDL        Use Indexes collection.
DROP TABLE            DDL        Use Tables collection.
DROP VIEW             DDL        Use Close method on Snapshot object.
FETCH                 CCL        Field name of a Dynaset or Snapshot object.
FOREIGN KEY           DDL        Access SQL doesn't support DDL.
GRANT                 DCL        Access SQL doesn't support DCL.
IN subquery           DQL        Use a query against a query Dynaset instead of a subquery.
POSITION()            DQL        Use InStr().
PRIMARY KEY           DDL        Access SQL doesn't support DDL.
PRIVILEGES            DCL        Access SQL doesn't support DCL.
REFERENCES            DDL        Access SQL doesn't support DDL.
REVOKE                DCL        Access SQL doesn't support DCL.
ROLLBACK              TPL        Visual C++ MFC Rollback() member function.
SUBSTRING()           DQL        Use Mid() functions.
UNION                 DQL        UNIONs currently aren't supported by Access SQL.
UNIQUE                DDL        Access SQL doesn't support DDL.
WORK                  TPL        Not required by Visual C++ CDatabase transaction functions.
*=                    DQL        Use LEFT JOIN.
=*                    DQL        Use RIGHT JOIN.
!=                    DQL        Use the <> for not equal.
? (parameter marker)  DQL        Use the PARAMETERS statement (if needed).
Table 5.5 lists Access SQL keywords that aren't reserved words in ANSI SQL. Many of the Access SQL keywords describe data types that you specify by using the DB_ constants. Data type conversion to and from ANSI SQL is discussed shortly.
Table 5.5. Access SQL keywords and symbols that aren't reserved words or symbols in ANSI SQL.

Access SQL                  ANSI SQL         Category   Description
BINARY                      No equivalent    DDL        Presently, not an official Access data type (used for the SID field in SYSTEM.MDA).
BOOLEAN                     No equivalent    DDL        Logical field data type (0 or -1 values only).
BYTE                        No equivalent    DDL        Asc()/Chr() data type; 1-byte integer (tinyint of SQL Server).
CURRENCY                    No equivalent    DDL        Currency data type.
DATETIME                    No equivalent    DDL        Date/time field data type (Variant subtype 7).
DISTINCTROW                 No equivalent    DQL        Creates an updatable Dynaset object.
DOUBLE                      REAL             DDL        Double-precision floating-point number.
IN (with crosstab queries)  No equivalent    DQL        Defines fixed-column headers for crosstab queries.
LONG                        INT[EGER]        DDL        Long integer data type.
LONGBINARY                  No equivalent    DDL        OLE Object field data type.
LONGTEXT                    No equivalent    DDL        Memo field data type.
(WITH) OWNERACCESS          No equivalent    DQL        Runs queries with (OPTION) object owner's permissions.
PARAMETERS                  No equivalent    DQL        User- or program-entered query parameters. Should be avoided in Visual C++ code.
PIVOT                       No equivalent    DQL        Use in crosstab queries.
SHORT                                        DDL        Integer data type; 2 bytes.
SINGLE                                       DDL        Single-precision real number.
TEXT                        VARCHAR[ACTER]   DDL        Text data type.
TRANSFORM                   No equivalent    DQL        Specifies a crosstab query.
? (LIKE wildcard)           _ (wildcard)     DQL        Single character with Like.
* (LIKE wildcard)           % (wildcard)     DQL        Zero or more characters.
# (LIKE wildcard)           No equivalent    DQL        Single digit, 0 through 9.
# (date specifier)          No equivalent    DQL        Encloses date/time values.
<> (not equal)              !=               DQL        Not-equal comparison operator.
Access SQL provides the four SQL statistical aggregate functions listed in Table 5.6 that are not included in ANSI SQL. These Access SQL statistical aggregate functions are set in upper- and lowercase type in the Microsoft documentation but are set in uppercase type in this book.
Table 5.6. Access SQL statistical aggregate functions.

Access Function   Description
STDDEV()          Standard deviation of a population sample.
STDDEVP()         Standard deviation of a population.
VAR()             Statistical variation of a population sample.
VARP()            Statistical variation of a population.
Table 5.7 lists the Access SQL keywords that often appear in upper- and lowercase rather than the all-uppercase SQL format of the Microsoft documentation.
Table 5.7. Typesetting conventions for Access SQL keywords and ANSI SQL reserved words.
Access SQL   ANSI SQL and This Book
And          AND
Avg()        AVG()
Between      BETWEEN
Count()      COUNT()
Is           IS
Like         LIKE
Max()        MAX()
Min()        MIN()
Not          NOT
Null         NULL
Or           OR
Sum()        SUM()
Table 5.8 lists the data types specified by ANSI SQL-92 and the equivalent data types of Access SQL when equivalent data types exist. Categories of ANSI SQL data types precede the SQL-92 data type identifier.
Table 5.8. Data type conversion to and from ANSI SQL and Access SQL.

ANSI SQL-92                   Access SQL           C Datatype   Comments
Exact Numeric                 Number
  INTEGER                     Long (integer)       long int     4 bytes
  SMALLINT                    Integer              short int    2 bytes
  NUMERIC[(p[, s])]           Not supported                     p = precision, s = scale
  DECIMAL[(p[, s])]           Not supported                     p = precision, s = scale
Approximate Numeric           Number
  REAL                        Single (precision)   float
  DOUBLE PRECISION            Double (precision)   double
  FLOAT                       Not supported
Character (Text)              Text
  CHARACTER[(n)]              String               char *
  CHARACTER VARYING           String               char *
Bit Strings                   None supported
  BIT[(n)]                    Not supported
  BIT VARYING                 Not supported
Datetimes
  DATE                        Date/Time
  TIME                        Not supported
  TIMESTAMP                   Not supported
  TIME WITH TIME ZONE         Not supported
  TIMESTAMP WITH TIME ZONE    Not supported
Intervals (Datetimes)         None supported
Many of the data types described in the Access SQL column of Table 5.8 as not being supported are converted by ODBC drivers to standard ODBC data types that are compatible with Access SQL data types. When you use attached database files, data types are converted by the Access database engine's ISAM drivers for dBASE, FoxPro, Paradox, and Btrieve files. Data type conversion by ODBC and ISAM drivers is one of the subjects of the next chapter.
Summary
It's impossible to fully describe all the reserved words and syntax of Structured Query Language in a single chapter, especially when the chapter also must compare a particular dialect of SQL (Access SQL) to a "standard" version of the language. This is particularly true when the standard language is new, as is the case for SQL-92, and when no RDBMSs support more than a fraction of the reserved words added to SQL-89 by the new standard. For a full exposition of SQL-92, you need a reference guide, such as Jim Melton and Alan R. Simon's Understanding the New SQL: A Complete Guide (see the section "A Visual C++ and Database Bibliography" in this book's Introduction).

This chapter introduced newcomers to SQL, first to the ANSI variety and then to the Access dialect. ANSI SQL (as implemented by the Microsoft ODBC API) must use the SQL pass-through technique, which lets you process queries on the back-end server of a client-server database. In order to use the Access database engine to process queries, you also need to know the Access dialect of SQL. There are many examples of both ANSI SQL and Access SQL queries in this book, so you've just started down the path to fluency in using SQL with Visual C++ database applications.

The next chapter delves into the innards of the Access database engine and its relationship to the ODBC API. Chapter 7 shows you how Visual C++ applications interface with ODBC drivers. The last chapter in Part II expands your SQL vocabulary to Access SQL's crosstab query syntax and shows you how to write SQL statements that modify the data in database tables.
Defining the Characteristics of Data Objects
    Jet Data Access and Remote Data Objects
    Instances of Data Objects
    Persistent Member Objects
    Recordset Objects Created from Virtual Tables
    Dynamic Recordset Objects
    Static Recordset Objects
    Consistency Issues with Recordset Objects
Understanding the Properties and Methods of the DAO DBEngine Object
Defining the Workspace and Database Objects
    Properties and Methods of the Workspace Object
    Properties of the Database Object
    Methods Applicable to the Database Object
    Connecting to an Existing Jet Database
Using the TableDefs Collection and TableDef Objects
    The Attributes Property of TableDef Objects
    Understanding Flags and Intrinsic Symbolic Constants
    Mapping Database Member Objects with the CDaoTableDef Collection
    Mapping the Fields and Indexes Collections
Using the QueryDefs Collection and QueryDef Objects
Creating Tables with C++ Code
Creating and Using Recordset Objects
    Properties of Recordset Objects
    Methods Applicable to Recordset Objects and Collections
Summary
book to describe the container (the base or master class) for all the data-related objects discussed in this chapter. The first part of this book gave you a brief introduction to the DAO and its member objects. This chapter describes the structure of the Jet DAO in detail, because the member objects of the DAO constitute the foundation on which the majority of your Visual C++ database applications are built. This chapter features examples that use the DAO's member objects to create instances of objects with C++ code and display the properties of the objects in list boxes. By the time you complete this rather lengthy chapter, it's very likely that you will have learned more than you ever wanted to know about data-related objects (or, more simply, data objects).
NOTE There is not a one-to-one mapping between DAO objects and the MFC DAO classes. Wherever possible, I will show where a DAO object's functionality is found when the corresponding MFC DAO class member is different.
NOTE Technically, you should be able to alter any property of a programmable object by assigning an appropriate value to the Set member of the function pair. The ability to set property values under specific conditions depends on the type of object and the application in which the object is used. Access 1.x, for example, had many objects whose properties could be set only in design mode. Access 2.0 and Access 7.0 have far fewer of these "frozen" objects. Visual C++ 4.0, unlike Visual Basic and Access, doesn't even have a design mode.
The Jet DAO is an OLE Automation in-process server that provides an object-oriented wrapper for the DLLs that comprise the Jet database engine. OLE Automation provides indirect access to properties and methods of programmable objects through a set of predefined interfaces. As a Visual C++ 4 programmer, you don't have to take
any special steps to use the DAO features. Figure 6.1 shows AppWizard creating a DAO database project called DataDict.

Figure 6.1. Visual C++ 4.0's AppWizard creating the DataDict project.

OLE Automation server applications are selective about which programmable objects and member functions are accessible to other applications. Making member functions of OLE Automation server applications accessible to OLE Automation container applications is called exposing the member function. OLE Automation servers have two classes of functions, Public and Private. Only Public functions are exposed to OLE Automation client applications, such as Visual C++. Once you create a reference to an OLE Automation server object, Visual C++'s Class Browser provides a convenient list of the collections and objects exposed by the server, plus the member functions of each object. Figure 6.2 shows the CDaoRecordset constructor in the Visual C++ 4 Browser. The syntax for the selected method or property appears to the right of the ? button, which opens the help topic for the property or method, as shown in Figure 6.3.

Figure 6.2. Visual C++ 4.0's Browser displaying the syntax for the CDaoRecordset::CDaoRecordset() constructor.

Figure 6.3. The online help topic for the CDaoRecordset::CDaoRecordset() constructor.

The Data Access Object classes in Visual C++ 4.0 are implemented by the seven main CDao... classes. (See Chapter 13, "Understanding MFC's DAO Classes," for more information on the MFC DAO classes.) These classes include all the objects that let you create, connect to, and manipulate the supported database types. This book uses the term compound object to describe an object that contains other objects to maintain consistency with OLE's compound document terminology. Like OLE's compound documents, compound objects have a hierarchical structure. Objects that are contained within other objects are called member objects of the container object. Visual C++ 4.0 treats member objects as properties of the container object. Figure 6.4 illustrates the hierarchy of Visual C++ 4.0's DAO database classes.

Access 2.0 and 7.0 have Forms, Reports, Scripts (macros), and Modules Documents collections that aren't supported in Visual C++ 4.0. In Visual C++ 4.0, Container objects and Documents collections are used to secure Jet databases in conjunction with System.mdw workgroup files that you create with Microsoft Access. (Visual C++ 4.0 can't create a workgroup file, previously called a system file.) Note that the CDaoFieldExchange class isn't derived from CObject.

Figure 6.4. The hierarchy of Visual C++ 4.0's Jet 3.0 Data Access Object.

The following sections describe the version of the Jet DAO included with Visual C++ 4.0, how you create member objects of the data access object, and one method of classifying these member objects (by their persistency). As is the case for many other disciplines, the taxonomy of database objects isn't a settled issue. Detailed information on the properties and methods of the data objects discussed in the following sections appears later in this chapter.
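To make the class relationships concrete before digging into the details, here is a minimal MFC DAO sketch (not taken from the book's sample code) that opens a Jet database and walks an SQL-defined Recordset. The path to BIBLIO.MDB is an assumption, and production code would wrap the calls in a try/catch block for CDaoException:

#include <afxdao.h>

void CountAuthors()
{
    CDaoDatabase db;                        // uses the default workspace
    db.Open(_T("C:\\BIBLIO.MDB"));          // hypothetical path to the sample database

    // Open a read-only Snapshot-type Recordset defined by an SQL statement
    CDaoRecordset rs(&db);
    rs.Open(dbOpenSnapshot, _T("SELECT Au_ID, Author FROM Authors"));

    long lRows = 0;
    while (!rs.IsEOF())                     // walk the result set
    {
        lRows++;
        rs.MoveNext();
    }
    TRACE(_T("Authors returned: %ld\n"), lRows);

    rs.Close();
    db.Close();
}

A CDaoRecordset-derived class with a DoFieldExchange() override, as AppWizard and ClassWizard generate, is the more typical approach when you need typed member variables for each field.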
CAUTION You might find this chapter to be somewhat difficult. It has a grand mix of DAO terminology and MFC DAO terminology. For example, there is such a thing as a DAO
Recordset, and there is also an MFC DAO CDaoRecordset class. Most of the methods and properties that are contained in the DAO Recordset are found as either member functions or member variables in the MFC DAO CDaoRecordset class object. However, that one-to-one relationship doesn't always carry through. Some of the native DAO methods and properties are simply not supported in the MFC DAO classes. I will tell you when a feature of DAO isn't directly supported. For unsupported features, you can call directly to the DAO engine. See Books Online, MFC Technical Notes, Number 54 for more information about interacting directly with DAO. It will infrequently be necessary to directly interact with DAO.
Visual Basic programmers will realize that there are actually three different versions of DAO. Visual C++ 4 is shipped only with DAO 3.0, a 32-bit version of DAO. MFC 4's DAO classes are only 32-bit. For programmers who are still developing 16-bit applications, DAO is just not available. Here are the three versions of DAO (each of which can be found in your Windows 95 \Program Files\Common Files\Microsoft Shared\Dao folder):
The Microsoft DAO 3.0 Object Library is the standard 32-bit OLE Automation "wrapper" (Dao3032.dll) for the Jet 3.0 database engine (MSJT3032.DLL in \Windows\System). Dao3032.dll is included with Visual C++ 4.0 and can be used only with 32-bit applications. Microsoft Access 7.0 uses the DAO 3.0 Object Library and the Jet 3.0 database engine.

The Microsoft DAO 2.5/3.0 Compatibility Library is an alternative 32-bit OLE Automation type library (Dao2532.dll). Dao2532.dll isn't included with Visual C++ 4.0.

The Microsoft DAO 2.5 Object Library is a 16-bit OLE Automation wrapper (Dao2516.dll) for the Jet 2.0 database engine of Access 2.0 (Msajt200.dll). Dao2516.dll isn't included with Visual C++ 4.0 and is of no use to Visual C++ programmers because DAO isn't supported by the 16-bit version(s) of MFC.
Figure 6.5 shows the relationships between the Jet database engine used by Microsoft Access 1.1, 2.0, and 7.0 (Access 95), and Visual C++ 4.0.

Figure 6.5. A simplified comparative diagram of the 16-bit and 32-bit implementations of the Jet database engine.

With Visual C++ 4, the programmer is limited to using DAO 3.0. With DAO 3.0, you can create, open, or attach tables from 1.0, 1.1, 2.5, and 3.0 Jet databases. There's not too much in the way of limitations. Follow these guidelines:
Use the CompactDatabase() member function of the CDaoWorkspace class to convert Jet databases from one version to another for use with Visual C++ 4.0 (see the sketch after this list). The CompactDatabase() function doesn't convert Access-specific objects, such as forms, reports, macros, and modules, from one version to another.

Don't use the CompactDatabase() method to convert Jet 1.0, 1.1, or 2.5 .MDB files to version 3.0 if you plan to use the Jet 3.0 .MDB file with Access 7.0. Access 7.0 can't open an .MDB file that you convert from an earlier version to version 3.0 with Visual C++ 4.0's CompactDatabase() method.

Visual C++ 4.0 can't create workgroup (System.mdw) files that are necessary to secure .MDB databases. You need a copy of Access 7.0 to create a 32-bit workgroup file or a copy of Access 2.0 to create a 16-bit system file (SYSTEM.MDA). Workgroup or system files usually reside on a workgroup or file server, together with shared .MDB files.

Don't convert existing 16-bit Access system files (SYSTEM.MDA) to 32-bit workgroup files (System.mdw) until all users of the system file have converted to Jet 3.0 databases. You can attach a 16-bit SYSTEM.MDA file to a Jet 3.0 database without difficulty.
NOTE Use Access's database conversion feature to upgrade versions of .MDB files that contain Access-specific objects. Converting Access 1.0 and 1.1 files is a two-way process. However, once you use Access to convert a version 1.0 or 1.1 .MDB file to Access 2.0 format, or convert a 1.0, 1.1, or 2.0 file to Access 7.0 format, the process isn't reversible. It's a safer practice to use the appropriate version of Access for all Jet database conversion operations.
You create an instance of the Data Access Object when you create an application that uses DAO and then reference that application's DAO object(s). Each time you add a reference (such as DataDict's dialog boxes, shown later), you add a reference to the object, not the object itself. With the data access object, a single CDaoDatabase object is created, and it is referenced throughout your application. You create an instance of the DAO when you use an MFC DAO class that creates a DAO connection. Here's how you create an instance of the CDaoDatabase object data type (object class) for an existing database:

1. You declare an object variable with CDaoDatabase db.

2. You instantiate (create a pointer to) the ne