Using Oracle8

Using Oracle8 identifies the many functions an Oracle DBA needs to perform. You learn about designing and creating a database and its tablespaces. You also learn about creating and maintaining users and performing an upgrade.


informit.com -- Your Brain is Hungry.

InformIT - Introduction From: Using Oracle8


Introduction

From: Using Oracle8. Author: David Austin. Publisher: Que.

- Introduction
- Who Should Use This Book
- Why This Book?
- How This Book Is Organized
- Conventions Used in This Book
- About the Authors
- Acknowledgments
- Tell Us What You Think!

Introduction
Welcome to Using Oracle8! This book identifies the many functions an Oracle DBA needs to perform on an Oracle8 database and explains how to do them as efficiently and effectively as possible. You learn about the key functions of database administration, including installing the product, designing and creating a database and its tablespaces, designing and creating the tables and other objects that make up an Oracle database, designing and executing a good backup strategy with a recovery methodology, and monitoring and tuning performance. You also learn about creating and maintaining users and performing an upgrade to Oracle8, as well as other tasks that you may need in your position as DBA.

You also learn when and how to use the various tools Oracle8 provides to assist you in database management, performance monitoring and tuning, data loading, backup and recovery, and data export and import.

The book is designed to let you read about a topic at length when you have the time and the inclination, or to use as a quick reference guide when you need an answer to a pressing technical question or an example to follow when performing a specific task. Using Oracle8 contains cross-references to related topics so that you can look at all aspects of a topic, even if they're covered in different chapters. These cross-references also enable you to read the book in any order you choose. If you run across a subject you don't fully understand, you can easily switch your attention to the area(s) identified and carry on your reading there. Where applicable, the book also references the Oracle documentation materials, so you can find even more detail if you need it.

Don't forget to keep this book handy at work, just in case you need to check something in a hurry that you haven't read about yet or is a new topic to you. Be sure also to use the tear-out card inside the book's cover. It contains some of the most common, but difficult-to-remember, information you'll need.

Who Should Use This Book


You'll get the most out of this book if you have some background in the SQL language and some knowledge or experience with relational databases. Because Oracle's SQL language is based on the ANSI standard, it's not discussed in detail in this book, but numerous examples use SQL statements. The theory of relational databases is also outside the scope of this book, as are the internal structures within Oracle, except where they're needed to help you understand how or why to perform a specific task.

This book is intended primarily for DBAs who have some knowledge of relational databases. Much of this book will be familiar if you've worked with earlier releases of Oracle, but you'll find the new Oracle8 features discussed. If you've worked with other relational databases, you may need to refer to the glossary if you find brand-new terms or terms that have different meanings in Oracle. If you haven't worked with any relational databases, you should expect to follow the frequent cross-references to sections of the book; this will fill in background information as you read about a topic.

Why This Book?


Have you ever purchased a Using book from Que? The Using books have proven invaluable to readers as both learning guides and references for many years. The Using series is an industry leader and has practically become an industry standard. We encourage and receive feedback from readers all the time, and consider and implement their suggestions whenever possible.

This book isn't a compiled authority on all the features of Oracle8; instead, it takes a streamlined, conversational approach to using Oracle8 productively and efficiently. This book has many features:

- Improved index. What do you call tasks and features? As we wrote this book, we anticipated every possible name or description of a feature or database activity.
- Real-life answers. Throughout the book you find our real-life examples and experiences. We recommend how to organize your database on the logical as well as the physical level. We suggest what values to use when assigning physical storage attributes to your tables, indexes, and other database objects, and how to determine whether you have made a good set of choices. After all, we've been there, done that! We understand that how to perform a task is only one question you may have, and perhaps the bigger questions are "Why?" and "What for?"
- Relevant information written just for you. We have carefully scrutinized which features and tasks to include in this book and have included those that apply to your everyday use of Oracle. Why invest in material that teaches you how to perform tasks you may never need to perform?
- Reference or tutorial. You can learn to quickly perform a task using step-by-step instructions, or you can investigate the why and wherefore of a task with the discussions preceding each task.
- Wise investment. We don't waste your valuable bookshelf space with redundant or irrelevant material, nor do we assume you "know it all" or need to know it all. Here is what you need, when you need it, how you need it, with an appropriate price tag.
- Easy-to-find procedures. Every numbered step-by-step procedure in the book has a short title explaining exactly what it does. This saves you time by making it easier to find the exact steps you need to accomplish a task.

How This Book Is Organized


Using Oracle8 has task-oriented, easy-to-navigate tutorials and reference information presented in a logical progression from simple to complex tasks. It covers the features of the product you use in your daily work, and the examples are drawn from real life. You can work through the book lesson by lesson, or you can find specific information when you need to perform a job quickly.

Using Oracle8 is divided into nine parts:

- Part I, Building Your Oracle Database. Part I introduces you to relational databases in general and to the basic tools used to build and manage an Oracle8 database, whether you're creating a database from scratch or converting from an earlier release.
- Part II, Customizing Your Oracle Database. Part II shows you how to build the appropriate storage units for your database objects. It's here you also find out how to manage the shared structures required for Oracle to function in the multiuser environment, including redo log files, rollback segments, and temporary segments.
- Part III, Managing Data and Index Segments. Part III provides the details needed to build Oracle database tables and indexes, including information on sizing them and assigning appropriate physical characteristics and alternate structures. You find out about the logical and physical design of data and index information, including optional segment structures such as index-organized tables.
- Part IV, Managing Users and Database Security. Part IV explains how to create user IDs and manage user access to the database and its objects. You are also introduced to methods for monitoring and controlling resource usage. The chapters in this section include detailed information on the new password-management features introduced in Oracle8.
- Part V, Backing Up Your Oracle Database. Part V covers the various options available for protecting your database contents against loss due to hardware failure. You learn how to restore data that's lost when failures occur. The chapters in this section also cover the Recovery Manager tools, introduced as part of Oracle8.
- Part VI, Tuning Your Database Applications. In Part VI you learn about the various tools and techniques that DBAs and application developers should consider when building and tuning applications. These include performance-analysis tools and various resource-saving design considerations such as indexes, clustering techniques, optimizer selection, and constraint management.
- Part VII, Tuning Your Database. Part VII addresses the issues related to gathering and analyzing performance information about your database. The chapters in this section include information on tools available for these tasks, as well as how to interpret various statistics available to you and how to respond to performance degradation caused by various factors.
- Part VIII, Using Additional Oracle Tools and Options. In Part VIII you learn about the various tools provided by Oracle as part of the base product that can help you manage your database and the data within it, plus network access between your applications and the database. This section also summarizes the features available with the products that you can optionally license for added functionality if needed, such as Oracle Parallel Server and the Object option.
- Additional information available at our Web site (www.mcp.com/info). Appendix A, "Essential PL/SQL: Understanding Stored Procedures, Triggers, and Packages," includes a comprehensive guide to the PL/SQL language and the database constructs you can build with it. Appendix B, "What's New to Oracle8," lists Oracle8's new features for those of you who are familiar with earlier Oracle releases and just need to identify what changes you may want to study and implement in your own database. Appendix C, "Installing Oracle8," covers the basic steps you need to follow in order to install a new version of the database, whether you're upgrading from Oracle7 or installing Oracle8 directly.

Now look at the detailed table of contents, decide what you want to read now or in the near future, and begin getting comfortable with Oracle8.

Conventions Used in This Book


The following items are some of the features that will make this book easier for you to use:

- Cross-references. We've looked for all the tasks and topics related to a topic at hand and referenced them for you. If you need to look for coverage that leads up to what you're working on, or if you want to build on the new skill you just mastered, you have the references to easily find the right coverage in the book:
  SEE ALSO
  Information on tablespace usage for different segment types, see page xxx
- Glossary terms. For all terms that appear in the glossary, you'll find the first appearance of that term italicized in the text.
- SideNotes. Information related to the task at hand or "inside" information from the authors is offset in SideNotes so that it won't interfere with the flow of the text, yet is easy to find. Each SideNote has a short title to help you quickly identify the information you'll find there.

Oracle's syntax for commands, scripts, and SQL statements also incorporates special elements. Look at the following syntax example:

ALTER DATABASE [database_name]
  ADD LOGFILE [GROUP [group_number]] filename
  [SIZE size_integer [K|M]] [REUSE]

Terms that are italicized are considered placeholders. When you use the command, you replace the italicized word with an appropriate value. For example, database_name in the preceding code would be replaced with an actual database name. Square brackets ([]) in command syntax indicate optional clauses; the brackets around [database_name] in the preceding code indicate that you aren't required to provide a database name. Don't include the brackets when you use the command. The | character indicates that you choose one item or the other, not both. (For example, you can choose either K for kilobytes or M for megabytes in the preceding command.) Again, don't use this character in the actual command. Ellipses (...) in listings indicate either a clause that can repeat or skipped code that's not pertinent to the discussion. Don't use the ellipses in the actual code.
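As a concrete illustration of these conventions, here's the preceding syntax with the placeholders filled in. The database name, group number, file name, and size are hypothetical, chosen only for this example:

```sql
-- All the optional clauses are used here; note that the square
-- brackets and the | character from the syntax diagram do not appear.
ALTER DATABASE prod
  ADD LOGFILE GROUP 4
  '/u02/oradata/prod/redo04a.log' SIZE 1M REUSE;
```

Compare this with the syntax diagram: database_name became prod, group_number became 4, and M (megabytes) was chosen over K for the SIZE clause.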


- Code listing line numbers. Line numbers are included in some code listings to make discussion about the code easier to reference. Don't include the numbers with any command-line commands, as part of any Oracle scripts, or within SQL statements.

About the Authors


David Austin has been in the data processing profession for almost 25 years. He had worked with many database architectures, including hierarchical, network, and relational, before becoming an Oracle DBA about 10 years ago. For the past five years, David has worked for Oracle Corporation, where he's now employed as a senior principal curriculum developer. His previous positions at Oracle include senior principal consultant and senior principal instructor. David has completed multiple Masters programs with Oracle Education and is a Certified Oracle Database Administrator. David is a contributing author for Que's Special Edition Using Oracle8 and wrote Chapters 3, 5-8, 10-11, 16-17, and the glossary for this book. He obtained both of his degrees (a B.A. with a double major in mathematics and English and a minor in computer science, and an M.S. in applied mathematics) from the University of Colorado.

Vijay Lunawat is a technical specialist with Oracle Corporation; he resides in Orlando, Florida. He has a bachelor's degree in electronics engineering. He has worked with Oracle databases for more than 10 years as a developer, database administrator, consultant, and support analyst. A specialist in Oracle Parallel Server, he's now working with the Center of Expertise in Oracle Support Services, where he develops and frequently teaches Oracle Internals classes. He was a contributing author for Que's Special Edition Using Oracle8 and wrote Chapters 1, 13, 18, 27, and Appendix A for this book.

Meghraj Thakkar works as a senior technical analyst at Oracle Corporation. He has been working with various Oracle products for the past six years. He has a master's in computer science and a bachelor's in electronics engineering. He holds several industry vendor certifications, including Microsoft Certified Systems Engineer (MCSE), Novell Certified ECNE, and Lotus Certified Notes Consultant. He has taught several courses at the University of California, Irvine; developed and presented a two-day course, "Supporting Oracle on Windows NT," to internal Oracle employees; and presented two papers at ECO'98, held in New York City in March 1998. He also coauthored several books for Macmillan Computer Publishing (Special Edition Using Oracle8, Oracle8 Server Unleashed, and Oracle8 for DBAs) and contributed Chapters 2, 4, 14, and 21-23 for this book.

Tomas Gasper, the author of Chapters 9, 12, and 20, is an Oracle DBA for Energizer Battery Company in St. Louis, Missouri. He has worked in a variety of system support roles, including DBA, UNIX and Windows NT administrator, and systems programmer. As a refugee from the defense industry, Tomas enjoys learning about exotic and unique computer systems. His hobbies include experimenting with Linux systems, Web-based applications, and, of course, exploring the Internet. Tomas can be reached at [email protected] or [email protected].

Ari Kaplan, the author of Chapter 25, is an independent computer consultant specializing in Oracle product design, development, and management. Ari, coauthor of Que's Special Edition Using Oracle8 and Waite Group Press's Oracle8 How-To, both for Macmillan Computer Publishing, has played pivotal roles in implementing some of the nation's largest and most visible Oracle applications for various industries. Ari worked for Oracle before becoming a consultant in 1994, and since 1990 he has worked as a consultant for several Major League Baseball clubs (currently the Montreal Expos), where he has developed and managed their scouting departments' software systems. Ari graduated from the California Institute of Technology in Pasadena and was granted the school's prestigious "Alumni of the Decade" distinction for his contributions in the computer industry. He has appeared on NBC's Today Show and CNN, and is a frequent guest speaker on Oracle in the United States and abroad. Ari lives in Chicago and can be reached at [email protected] or through his Web site of Oracle tips at http://homepage.interaccess.com/~akaplan.

Raman Batra, the author of Chapters 24 and 26, is a database administrator for the Cessna Aircraft Company in Wichita, Kansas, where he has been working since 1994. In the past, Raman has developed client/server and intranet applications using Developer/2000 Forms, Pro*C, and Oracle Web Application Server. As a DBA, he administers Designer/2000 applications and application suites for engineering, supply chain, and parts distribution systems on Oracle7 and Oracle8. He is also designing an advanced replication architecture on Oracle8 for failover and disaster recovery. Raman is a member of the Oracle Technology Network, ACM, IOUGA, and the Wichita Area Oracle Users Group. Raman can be reached at [email protected].

Joseph Duer, the author of Chapters 15 and 19, is a technical analyst and Oracle database administrator at a technology-driven corporation based in southern Connecticut. He specializes in Web development using Oracle Database Server and Oracle Application Server. He has developed object-oriented systems that utilize C++, Java, and JavaScript, as well as Oracle Application Server's Java, PL/SQL, and VRML cartridges. He can be reached via email at [email protected] and via his Web page at http://www.netcom.com/~joeduer.

Acknowledgments
From David Austin: Thanks to the many professionals who have helped and encouraged me in my career and my work on this book, particularly my various managers, instructors, and colleagues at Oracle, including Deborah West, Chris Pirie, Nick Evans, Vijay Venkatachalam, Larry Mix, Beth Winslow, Sue Jang, Scott Gossett, and Scott Heisey. I would also like to say thank you to some of my earliest mentors in this business, Bob Klein, Donald Miklich, and Roland Sweet, wherever they might be. Thanks also to the various editors at Que who helped shepherd this work from its inception to the book you now have in your hands, with a special mention for Angela Kozlowski and Susan Dunn. I also want to thank my coauthors, without whose efforts this work could never have been finished. Finally, a thank you to my family for putting up with the long hours I spent ignoring them while working on this project. My wife, Lillian, is now bracing for the revisions, while my kitten is just happy that she once again gets some petting when she sits in my lap.

From Vijay Lunawat: Most thanks go to my two children, Siddharth and Sanchi, and my wife, Sushma, for their patience and for putting up with my long hours and weekend work while I was writing for this book.

From Meghraj Thakkar: I would like to give a special thanks to my wife, Komal, for her patience and understanding.

From Raman Batra: To my lovely wife, Sarika, for her understanding and admirable patience in keeping my daughter, Nikita, away from me when I was writing. Nikita had a real hard time understanding why Daddy was working with such "boring" text stuff with no music, when she could be watching her Winnie the Pooh CD on Daddy's PC.

From Joe Duer: I would like to thank once again the Tuesday night crew at Shelton EMS: Jason, Betty, John, and Denise. Your help covering all the shifts I missed because I was writing is greatly appreciated. I would also like to thank everyone at Que, in particular Angela Kozlowski and Susan Dunn, for their help and guidance during the development of this book.

Tell Us What You Think!


As the reader of this book, you are our most important critic and commentator. We value your opinion and want to know what we're doing right, what we could do better, what areas you'd like to see us publish in, and any other words of wisdom you're willing to pass our way. As the Executive Editor for the Client/Server Database Team at Macmillan Computer Publishing, I welcome your comments. You can fax, email, or write me directly to let me know what you did or didn't like about this book, as well as what we can do to make our books stronger.

Please note that I cannot help you with technical problems related to the topic of this book, and that due to the high volume of mail I receive, I might not be able to reply to every message. When you write, please be sure to include this book's title and author as well as your name and phone or fax number. I will carefully review your comments and share them with the author and editors who worked on the book.

Fax: 317-817-7070
E-mail: [email protected]
Mail: Executive Editor, Client/Server Database Team, Macmillan Computer Publishing, 201 West 103rd Street, Indianapolis, IN 46290 USA



Introducing Relational Databases and Oracle8



- What's a Database Management System?
- Oracle Database Files
  - The Initialization Parameter File
  - The Control File
  - The Data File
  - Redo Log Files
  - Archived Redo Log Files
- Understanding Database Instances
  - Starting and Stopping Instances
- Oracle8's Tools
  - Oracle Enterprise Manager (OEM)
  - SQL*Plus
  - PL/SQL
  - Net8
  - Precompilers
  - Developer/2000
- The Oracle8 Data Dictionary
  - Statistics and the Data Dictionary
  - Dynamic Performance Tables

In this chapter you learn:

- Functions performed by a database management system
- Physical architecture of an Oracle database
- Identify the major components of an Oracle instance
- Overview of database tools: Oracle Enterprise Manager, SQL*Plus, PL/SQL, Net8, Developer/2000, and precompilers
- Understand the Oracle8 data dictionary and dynamic performance views

What's a Database Management System?


A database can be defined as a collection of information organized in such a way that it can be retrieved and used. A database management system (DBMS) can further be defined as the tool that enables us to manage and interact with the database. Most DBMSs perform the following functions:

- Store data
- Create and maintain data structures
- Allow concurrent access to many users
- Enforce security and privacy
- Allow extraction and manipulation of stored data
- Enable data entry and data loading
- Provide an efficient indexing mechanism for fast extraction of selected data
- Provide consistency among different records
- Protect stored data from loss through backup and recovery processes

Several different types of DBMSs have been developed to support these requirements. These systems can broadly be classified as follows:

- A hierarchical DBMS stores data in a tree-like structure. It assumes a parent-child relationship between the data. The top of the tree, known as the root, can have any number of dependents. Dependents, in turn, can have any number of subdependents, and so on. Hierarchical database systems are now obsolete.
- A network DBMS stores data in the form of records and links. This system allows more flexible many-to-many relationships than do hierarchical DBMSs. Network DBMSs are very fast and storage-efficient, and they allow complex data structures, but they're very inflexible and require tedious design. An airline reservation system is one example of this type of DBMS.
- Relational DBMSs (RDBMSs) probably have the simplest structure a database can have. In an RDBMS, data is organized in tables. Tables, in turn, consist of records, and records of fields. Each field corresponds to one data item. Two or more tables can be linked (joined) if they have one or more fields in common. RDBMSs are easy to use and have flourished in the last decade. They have commonly been used on low-end computer systems, although in the last few years their use has expanded to more powerful computer systems. Oracle, Informix, and Sybase are some popular RDBMSs available in the market.
- Object-oriented DBMSs are emerging to handle data that traditional DBMSs, designed around numbers and words, handle poorly. These systems can manage objects such as videos, images, pictures, and so on.

SideNote: Oracle8 stores objects in relational tables. Oracle8 is an object-relational database management system, which allows objects to be stored in tables, in a manner similar to numbers and words being stored in an RDBMS.
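To make the idea of linking tables on a common field concrete, consider the following sketch. It assumes two hypothetical tables, EMP and DEPT, that share a DEPTNO column; the table and column names are illustrative only:

```sql
-- Each employee row is linked (joined) to its department row
-- through the common DEPTNO field.
SELECT e.ename, d.dname
  FROM emp e, dept d
 WHERE e.deptno = d.deptno;
```

Because the relationship lives in the data itself rather than in physical pointers, any two tables with a common field can be joined this way, without redesigning the database.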

Oracle Database Files


An Oracle database physically resides in various files. Figure 1.1 shows the physical structure of an Oracle database. Figure 1.1 : An Oracle database system.

The Initialization Parameter File


The parameter file, commonly known as INIT.ORA, contains initialization parameters that control the behavior and characteristics of the database and the instance that accesses the database. You can edit this text file in your favorite editor.

SideNote: Changing the parameter file. The initialization parameter file is read by the instance only during startup. Any changes made in the initialization file take effect only after you shut down and restart the instance.

Oracle supplies a sample INIT.ORA file in the $ORACLE_HOME/dbs directory. $ORACLE_HOME is the top-level directory under which Oracle software is installed; it doesn't need to be the user Oracle's home directory. The default name of an instance's parameter file is initSID.ora, in which SID (System IDentifier) is a character string that uniquely identifies the instance on the system. You can override the default by using the PFILE parameter of the Server Manager's STARTUP command. The IFILE parameter in this file allows you to nest multiple initialization files for the same instance.
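For example, a minimal parameter file for a hypothetical instance named PROD might look something like the following. All file locations and values here are illustrative, not recommendations:

```
# initPROD.ora -- read by the instance only at startup
db_name       = PROD
control_files = (/u01/oradata/prod/control01.ctl,
                 /u02/oradata/prod/control02.ctl)
db_block_size = 2048
# IFILE nests another parameter file into this one
ifile = /u01/app/oracle/admin/prod/configPROD.ora
```

Keeping site-wide settings in the nested file and instance-specific ones in initSID.ora is one common way to organize these parameters.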

The Control File


The control file contains information about the database's physical structure and status. It has information about several things: the total number of data files, log files, redo log groups, and redo log members; the current redo log to which the database is writing; the name and location of each data file and online redo log file; the archived log history; and so on. Starting with Oracle8, the control file also contains information about backups of the database.

SideNote: Oracle updates the control file. Oracle automatically records any structural changes in the database (for example, the addition or deletion of a data file) in the control file(s). An Oracle instance updates the control file(s) with various status information during its operation.

The CONTROL_FILES initialization parameter specifies the name and location of a database's control files. It's strongly recommended that you specify multiple files in the CONTROL_FILES initialization parameter to mirror the control file to multiple locations. The V$CONTROLFILE dynamic performance view contains information about the database's control files. The V$CONTROLFILE_RECORD_SECTION dynamic performance view contains detailed structural information about the control file; this view gives information about all the records contained in the control file.
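For instance, you can inspect the control files and their internal record sections with queries such as the following, run from any SQL tool while connected as a suitably privileged user:

```sql
-- Names and locations of the control files the instance is using
SELECT name FROM v$controlfile;

-- Contents of each record section within the control file
SELECT type, records_total, records_used
  FROM v$controlfile_record_section;
```

The second query is a quick way to see, for example, how much of the archived log history the control file can hold before older records are overwritten.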

The Data File


An Oracle database stores user information in physical data files. A data file can contain tables, indexes, clusters, sequences, the data dictionary, rollback segments, temporary segments, and so on. At the logical level, Oracle manages space in terms of tablespaces (a tablespace is a group of one or more data files). When an Oracle database is created, it has only one tablespace: SYSTEM. Other tablespaces and the associated data files are added later, as needed. You can specify the name, location, and size of a data file while creating the tablespace to which the data file belongs. Oracle uses control files to store the name and location of the data files. Use the V$DATAFILE dynamic performance view and the DBA_DATA_FILES data dictionary view to retrieve information about a database's data files.
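As a sketch of both halves of this relationship, the following statements create a tablespace backed by a single data file and then list the database's data files. The tablespace name, path, and size are hypothetical:

```sql
-- Create a tablespace with one 50MB data file
CREATE TABLESPACE users_ts
  DATAFILE '/u03/oradata/prod/users01.dbf' SIZE 50M;

-- Review all data files and the tablespaces they belong to
SELECT file_name, tablespace_name, bytes
  FROM dba_data_files;
```

Tablespace design and sizing are covered in depth in Part II; this simply shows where the name, location, and size of a data file get specified.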

Redo Log Files


Oracle records all changes against the database in the redo log files and uses their contents to regenerate the transaction changes in case of failure. An Oracle database has two or more redo log files. Oracle allows you to mirror the redo log files; thus, a redo log group contains one or more files (members), and Oracle writes to all the members of a redo log group simultaneously. An Oracle instance writes to redo log groups in cyclical order; that is, it writes to one redo log group and then to the next when the earlier one is filled up. When the last available redo log group is filled, it switches over to the first one.

SEE ALSO
For detailed information about redo log files,

You can specify the name, location, and size of the redo log files during database creation. The V$LOGFILE dynamic performance view contains the redo log files' information. You can also add, delete, and relocate redo log files by using the ALTER DATABASE command.
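For example, the following statements add a mirrored redo log group with two members and then list the current members. The group number, file paths, and size are hypothetical:

```sql
-- Add a new group with two members, placed on separate disks
-- so that losing one disk doesn't lose the group
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/u02/oradata/prod/redo03a.log',
   '/u03/oradata/prod/redo03b.log') SIZE 1M;

-- Show each redo log member and the group it belongs to
SELECT group#, member, status FROM v$logfile;
```

Placing the members of a group on different physical disks is the usual motivation for mirroring: the instance can continue as long as one member of each group survives.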

Archived Redo Log Files


An archived log file contains a copy of a filled redo log file. Archived redo log files are useful in recovering the database and all committed transactions after a failure (such as a disk failure). When an Oracle database is operating in archive log mode, it must archive a recently filled redo log file before it can reuse it. You can enable automatic archiving by setting the initialization parameter LOG_ARCHIVE_START to TRUE or by issuing the ARCHIVE LOG START command after instance startup. When automatic archiving is enabled, the ARCH (archiver) process copies the filled redo log files to the directory specified by LOG_ARCHIVE_DEST. The LOG_ARCHIVE_FORMAT parameter defines the default names of the archived log files.
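A minimal sketch of the relevant init.ora settings (the directory and format string are hypothetical examples; %s substitutes the log sequence number):

```sql
-- init.ora settings for automatic archiving
log_archive_start  = true
log_archive_dest   = /disk3/arch/PROD
log_archive_format = arch_%s.arc
```

From Server Manager, ARCHIVE LOG LIST then confirms the archiving mode, the destination, and the current log sequence.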

Understanding Database Instances


http://www.informit.com/content/0789716534/element_001.shtml

informit.com -- Your Brain is Hungry. InformIT - Introducing Relational Databases and Oracle8 From: Using Oracle8

An Oracle database stores data in physical data files and allows controlled user access to these files through a set of operating system processes. These processes are started during instance startup. Because they work silently, without direct user interaction, they're known as background processes. To enable efficient data manipulation and communication among the various processes, Oracle uses shared memory, known as the Shared Global Area (SGA). These background processes and the shared memory segment together are referred to as an Oracle instance. In a parallel server environment, a database can be accessed by multiple instances running on different machines.

Oracle background processes that are always started
The SMON, PMON, DBWR, and LGWR processes are always present for an instance. Other processes are started by setting a related initialization parameter.

An Oracle instance consists of the following background processes:
q The process monitor process (PMON). This background process cleans up after a user process terminates abnormally. It rolls back the uncommitted transaction left behind and releases the resources locked by the process that no longer exists.
q The database writer process (DBWR). To ensure efficient and concurrent data manipulation, Oracle doesn't allow a user process to directly modify a data block on disk. Blocks that need to be modified, or into which data is inserted, are first fetched into a common pool of buffers known as the buffer cache. These blocks are then written to disk in batches by the DBWR background process. Thus, DBWR is the only process with write access to the Oracle data files.
q The log writer process (LGWR). Whenever an Oracle process modifies an Oracle data block, it also writes these changes to the redo log buffers. It's the responsibility of the LGWR process to write the redo log buffers to the online redo log file. This process reads the contents of the redo log buffers in batches and writes them to the online redo log file in sequential fashion. Note that LGWR is the only process writing to the online redo log files. Oracle's transaction commit algorithm ensures that the contents of the redo log buffers are flushed to the online redo log file whenever a transaction is committed.
q The system monitor process (SMON). This background process performs operations such as freeing up sort space and coalescing adjacent free extents into one big extent. SMON is also responsible for performing transaction recovery during instance recovery (during instance startup after a crash or shutdown abort). In a parallel server environment, it also detects and performs instance recovery for another, failed instance.
q The archiver process (ARCH). This process is started when the database is in archive log mode and automatic archiving is enabled. It copies the recently filled online redo log file to an assigned backup destination.
q The checkpoint process (CKPT). During a checkpoint, the DBWR process writes all the modified blocks to disk, and LGWR updates the header information of all data files with the checkpoint information. Because updating the file headers in a database containing a large number of data files might be a time-consuming task for LGWR, the CKPT process can be started during instance startup to help LGWR update the file headers during the checkpoint. This process is started only when the CHECKPOINT_PROCESS parameter is set to TRUE or when the number of data files in the database exceeds a certain number.
q The recoverer process (RECO). This process is responsible for recovering in-doubt transactions in a distributed database environment. It's started only when the initialization parameter DISTRIBUTED_TRANSACTIONS is set to greater than 0.
q The parallel query slave processes (Pxxx). Under favorable conditions, Oracle can reduce the execution time of certain SQL operations by dividing the operation among several dedicated processes. The processes used for parallel execution of SQL statements are known as parallel query slaves.
q The snapshot processes (SNPn). The snapshot, or job queue, processes are started when the parameter JOB_QUEUE_PROCESSES is set to more than 0. These processes execute jobs in the job queue, refresh any snapshot that's configured for automatic refresh, and so on.
q The dispatcher processes (Dxxx). Oracle supports multithreaded servers on some operating systems. When enabled, these processes receive user requests and put them in the request queues for execution. They also collect the results of the execution from the dispatcher queues and pass them back to users.
q The shared server processes (Sxxx). In a multithreaded server environment, the shared server processes execute the SQL operations from the request queues and put the results back in the corresponding dispatcher queues.

If you're running Oracle's parallel server option, you also see the following background processes on each instance:
q The lock process (LCKn). This Oracle parallel server process coordinates all the lock requests from the local and remote instances. It communicates with user processes and the lock daemon process.
q The lock monitor process (LMON). This Oracle parallel server process is responsible for reconfiguring the Integrated Distributed Lock Manager (IDLM) during instance startup and shutdown in an OPS environment. It also performs lock cleanup after the abnormal death of a user process.
q The lock daemon process (LMD0). This Oracle parallel server process is part of the IDLM. It handles all lock requests from remote instances for the locks held by the local instance.

Figure 1.2 shows the components of an Oracle instance: the SGA and background processes. Figure 1.2 : An Oracle instance consists of the SGA and background processes.
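To see which background processes are actually running in your instance, one possible check (assuming the V$BGPROCESS view, which exists in Oracle8; a nonzero process address indicates a running process) is:

```sql
-- Background processes currently running in the instance
SELECT name, description
  FROM v$bgprocess
 WHERE paddr <> '00'
 ORDER BY name;
```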

Starting and Stopping Instances


An Oracle database isn't accessible to users until it's opened by an Oracle instance. In an Oracle parallel server environment, an Oracle database is accessed by more than one instance; each instance has its own set of background processes and its own SGA. An instance startup operation involves starting all the background processes and allocating the shared memory area (see Figure 1.3).

Who can start up and shut down an Oracle instance?
An instance startup operation can be done only by users with the requisite OS privileges or who have been assigned the OSOPER or OSDBA role.

Figure 1.3 : Oracle instance startup consists of three steps.

How Oracle starts instances
1. Oracle starts all the background processes and allocates the SGA. Oracle reads the initialization parameter file during this step.
2. Oracle reads the control file and associates the database with the instance. It detects the condition of the database from the last shutdown or crash.
3. Oracle reads the file headers of all the data files and redo log files and ensures consistency among all data files. If the instance is being started after a crash or shutdown abort, Oracle also applies all the redo generated since the last successful checkpoint. The Oracle database is accessible to users after this step completes.

If the instance is started after a crash or shutdown abort, Oracle needs to perform rollback operations for the uncommitted transactions. This work is performed by SMON in the background while the database is open and available for use.

An Oracle instance shutdown closes the database, dismounts it, and then removes the SGA and the background processes. Shutdown offers three modes: normal, immediate, and abort. Shutdown normal and shutdown immediate are used most often, whereas shutdown abort should be used with caution. During shutdown normal, Oracle waits for all users to disconnect, writes all modified data to the data files, and then updates the file headers, online redo log files, and control files. Shutdown immediate disconnects all users and then proceeds similarly to shutdown normal. Shutdown abort just removes all the background processes and the SGA; all cleanup work is done during the next startup. Table 1.1 lists Server Manager commands to start and stop an Oracle instance.

Table 1.1 Server Manager startup and shutdown commands

startup (or startup open): Uses the default parameter file to start the instance, mount the database, and open it
startup pfile=file: Starts the instance by using the specified parameter file
startup nomount: Allocates the SGA and starts the background processes; doesn't mount and open the database
startup mount: Allocates the SGA, starts the background processes, and mounts the database; doesn't open the database
alter database mount: Mounts the database after the instance is started with the startup nomount command
alter database open: Opens the database after it's mounted by the startup mount command
shutdown: Closes the instance after all users disconnect (normal shutdown)
shutdown immediate: Doesn't allow any new transactions to start; rolls back uncommitted transactions and closes the instance
shutdown abort: Immediately removes the SGA and the background processes
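A hedged sketch of how these commands might look in a Server Manager (svrmgrl) session; the parameter file path is a hypothetical example:

```sql
-- Connect with administrative privileges, then start the instance in stages
CONNECT internal

STARTUP NOMOUNT PFILE=/u01/app/oracle/admin/PROD/pfile/initPROD.ora
ALTER DATABASE MOUNT;
ALTER DATABASE OPEN;

-- Later, a clean shutdown that doesn't wait for users to disconnect
SHUTDOWN IMMEDIATE
```

Staged startup is useful when intermediate states are needed, for example mounting without opening to perform media recovery.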

SEE ALSO To learn how to start and stop database instances with Oracle Enterprise Manager,

Oracle8's Tools
Oracle provides various tools for application development and for performing administrative functions:
q Oracle Enterprise Manager (OEM)
q SQL*Plus
q PL/SQL
q Net8
q Developer/2000
q Precompilers

Oracle Enterprise Manager (OEM)


Oracle Enterprise Manager is a graphical system management tool that allows you to perform multiple tasks in a complicated database environment. OEM comes with several components; some, such as Oracle Expert and Performance Manager, are priced separately. Chapter 4, "Managing with Oracle Enterprise Manager (OEM)," explains how to use these components. OEM's major components are as follows (see Figure 1.4):

Figure 1.4 : Oracle Enterprise Manager consists of several modules.

q Backup Manager lets you perform backup and recovery operations associated with the database. It lets you interface with Oracle8's advanced backup and recovery utility, the Recovery Manager.
q Data Manager, a data transfer and loading tool, lets you invoke the export, import, and load utilities. Use the Export utility to extract data in Oracle's operating system-independent format; the exported data can be loaded into another Oracle database or back into the same database later. You can also use Export as the database's logical backup. The Loader utility is used to insert data into the Oracle database from text files.
q Instance Manager lets you manage instances, user sessions, and in-doubt transactions. It lets you start and shut down Oracle instances, and you can manage multiple instances in an Oracle parallel server environment. It also lets you manage the initialization parameter file used during instance startup.
q Lock Manager lets you view the locks held in an instance. It's a helpful tool for analyzing hung sessions and other, similar situations.
q Oracle Expert lets you tune instance and database performance. It generates a listing of recommendations that can be implemented automatically to improve performance.
q Performance Manager lets you monitor an Oracle instance's performance. It provides a graphical representation of various performance statistics.
q Schema Manager lets you perform Data Definition Language (DDL) operations, which let you create, alter, drop, and view database objects such as tables, indexes, clusters, triggers, and sequences.
q Security Manager lets you perform user-management tasks such as adding, altering, and dropping users, roles, and profiles.
q Software Manager allows you to administer software in a distributed environment and to automate database administration tasks.
q SQL Worksheet behaves much like a SQL*Plus session. You can use it to enter and execute SQL commands.
q Storage Manager lets you perform database space-management tasks such as creating, altering, and dropping tablespaces. It also lets you create, take online or offline, and drop rollback segments.
q Tablespace Manager lets you view the space usage within a tablespace at the object level. You can also get information about used and free space within the database.
q TopSession Monitor lets you monitor active user sessions and view user resource utilization. This information can be used to address slow performance.
q Oracle Trace lets you drill down into the execution of SQL statements to improve system performance.

SQL*Plus
The only interface available between end users and an RDBMS is Structured Query Language (SQL). All other applications and tools that users utilize to interact with the RDBMS act as translators/interpreters: these tools generate SQL commands based on a user's request and pass the generated SQL commands on to the RDBMS.

SQL*Plus can't start or stop an instance
A database administrator can't start and shut down an Oracle instance by using SQL*Plus.

SQL*Plus, Oracle's version of SQL, is one of the most commonly used Oracle tools. SQL*Plus enables users to instruct the Oracle instance to perform the following SQL functions:
q Data definition (DDL) operations, such as creating, altering, and dropping database objects
q Data query to select or retrieve the stored data
q Data manipulation (DML) operations to insert, update, and delete data
q Access and transfer data between databases
q Allow users to enter data interactively
q DBA functions, or database administrative tasks such as managing users (creating, altering, and dropping users), managing space (creating, altering, and dropping tablespaces), and backup and recovery

In addition to these basic SQL functions, SQL*Plus also provides several editing and formatting functions that enable users to print query results in report format.

Setting Up the SQL*Plus Environment

SQL*Plus has many advanced functions that you can use to present data in a visually pleasing format. You can set various environment variables to control the way SQL*Plus outputs a query. Table 1.2 lists some of the most common commands to set up the environment, which you can enter at the SQL*Plus prompt.

Table 1.2 SQL*Plus environment commands

set pagesize: Sets the number of lines per page
set linesize: Sets the number of characters in a line
set newpage: Sets the number of blank lines between pages
set pause: Causes SQL*Plus to pause before each page
set array: Sets the number of rows retrieved at a time
set feedback: Displays the number of records processed by a query
set heading: Prints a heading at the beginning of the report
set serveroutput: Allows output from the DBMS_OUTPUT.PUT_LINE stored procedure to be displayed
set time: Displays timing statistics
set term: Allows you to suppress output generated by a command executed from a file

Set up the environment automatically
You also can use the LOGIN.SQL and GLOGIN.SQL files to set up the environment for the current session while invoking SQL*Plus.
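For example, a report setup along these lines could go in your LOGIN.SQL file (the values shown are illustrative, not recommendations):

```sql
-- LOGIN.SQL: executed automatically when SQL*Plus starts
SET PAGESIZE 60
SET LINESIZE 132
SET FEEDBACK ON
SET HEADING ON
SET SERVEROUTPUT ON
```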


PL/SQL
PL/SQL stands for Procedural Language/Structured Query Language. It allows a user to utilize structured programming constructs similar to those of third-generation languages such as C, Fortran, and COBOL. PL/SQL enhances SQL by adding the following capabilities:

PL/SQL is embedded in Oracle8 tools
Although you can use PL/SQL as a programming language, it's also available as part of Oracle tools such as Oracle Forms and Oracle Reports. The PL/SQL engine embedded in these tools acts as the preprocessor.

q Define and use variables
q Control the program flow (IF, IF...THEN...ELSE, and FOR LOOP constructs)
q Use cursors and arrays
q Perform file I/O
q Write functions and procedures
q Use PL/SQL tables to move larger amounts of data

With PL/SQL, you can use SQL commands to manipulate data in an Oracle database and also use structured programming constructs to process the data.
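A minimal sketch combining several of these constructs; the table and column names (emp, ename, deptno) are hypothetical:

```sql
-- Loop over a cursor and report a running count with DBMS_OUTPUT
DECLARE
  v_count NUMBER := 0;
  CURSOR c_emp IS
    SELECT ename FROM emp WHERE deptno = 10;
BEGIN
  FOR r IN c_emp LOOP
    v_count := v_count + 1;
    DBMS_OUTPUT.PUT_LINE('Employee: ' || r.ename);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Total: ' || TO_CHAR(v_count));
END;
/
```

Remember to SET SERVEROUTPUT ON in SQL*Plus, or the DBMS_OUTPUT lines are silently discarded.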

Net8
Net8, formerly known as SQL*Net, is Oracle's networking interface. It allows communication between various Oracle products residing on different machines. It enables communication among client, server, and Oracle databases in a distributed environment. At the client end, the client application code passes messages on to the Net8 residing locally, and the local Net8 transfers messages to the remote Net8 via the underlying transport protocol. These messages are received by Net8 at the server, which sends them to the database server for execution. The server executes the request and responds to the client following the same path. Figure 1.5 shows the communication between client and server using Net8. Figure 1.5 : The client and the server communicate with each other through Net8. Net8 has many enhancements over its predecessor SQL*Net, such as connection pooling, multiplexing, listener load balancing, and caching the network addresses at the client end. Net8 is backward-compatible and can coexist with SQL*Net version 2.

Precompilers
A third-generation language compiler doesn't recognize the SQL needed to interface with the RDBMS. Therefore, if you need the power and flexibility of a language such as C, C++, Fortran, or COBOL and also want it to interface with the Oracle8 RDBMS, you need a tool that can convert the SQL statements into calls that a language compiler can understand. As Figure 1.6 shows, a precompiler program reads structured source code and generates a source file that a language compiler can process. Oracle provides several precompilers, such as Pro*C, Pro*COBOL, Pro*FORTRAN, and Pro*Pascal.

Figure 1.6 : You develop programs by using a precompiler.

You might want to use precompilers to get better performance while developing long-running batch programs and time-critical programs. With precompilers, you can do the following:
q Use dynamic SQL
q Gain better control over cursors and program flow
q Develop program libraries and use them in multiple applications
q Concurrently access data from multiple databases
q Write multithreaded applications by forking processes

Developer/2000
Developer/2000 provides a complete set of tools to develop applications that access an Oracle database. It consists of tools for creating forms, reports, charts, queries, and procedures. It also enables you to deploy existing and new applications on the Web. Developer/2000 consists of the following component tools:
q Project Builder. You can track and control application documents, source code files, charts, forms, reports, queries, and so on.
q Forms Builder. Forms are one of the easiest and most popular means for end users to interact with the database. End users totally unfamiliar with the database and the SQL language can easily learn a forms-based application to access the Oracle database. Forms Builder is the tool application developers use to develop forms.
q Report Builder. Although forms give users online, immediate interaction with the database, Report Builder allows users to queue data extraction requests at a remote report server. The report server, in turn, interacts with the database to generate the reports. You can also embed the reports in other online tools such as Web browsers, graphics, and forms.
q Graphics Builder. Graphical and visual representation of data is much more effective than raw data. You can use Graphics Builder to produce interactive graphical displays. Graphics Builder also allows you to include graphs in forms and reports.
q Query Builder. Query Builder allows you to interact with database tables in tabular form onscreen. The tables involved in the query are available onscreen, and users can construct the desired query by pointing and clicking. Formatted query results are also displayed.
q Schema Builder. Schema Builder is a graphical DDL (data definition language) tool. You can use it to create, alter, and drop database objects such as tables, indexes, clusters, and sequences.
q Procedure Builder. Procedure Builder helps you build procedures interactively. You can use its graphical interface to create, edit, debug, and compile PL/SQL programs. The PL/SQL program units that can be generated with Procedure Builder include packages, triggers, functions, and program libraries.
q Translation Builder. Translation Builder allows you to extract translatable strings from Oracle or non-Oracle resources and perform the desired translation. For example, you can translate Microsoft Windows (.RC) and HTML files to Oracle resource files.

Traditionally, Developer/2000 supported the client/server architecture, where the client tools and the application reside on one machine (usually the end-user PC) and the database server resides on another machine. With the proliferation of the Web, however, Oracle has introduced a three-tier architecture in which an additional server that runs the application code has been introduced. The client/server or three-tier architecture for installing Developer/2000 is highly recommended because the workload is distributed among the client, database server, and application servers. In addition, the application, Developer/2000, and the database software are independent of each other, thus making maintenance easier. SQL*Net or Net8 needs to be installed on the client and the database server to enable connectivity between the two.

The Oracle8 Data Dictionary


Oracle stores information about all the objects defined by users, structural information about the database, and so on in its internal tables. These Oracle internal tables and associated objects are collectively referred to as the data dictionary. The data dictionary is owned by the user SYS and always resides in the SYSTEM tablespace.

Data dictionary tables are created when the database is created
Oracle automatically updates these tables whenever it needs to. Users should never update any table in the data dictionary.

Several Oracle and non-Oracle tools also create some objects in the data dictionary that are used for storing operational, reference, and configuration information. Information stored in the data dictionary is available to users through data dictionary views. A database administrator or a user can use the data dictionary to view the following information:
q Definitions of database objects such as tables, partitions, indexes, clusters, views, snapshots, triggers, packages, procedures, functions, sequences, and synonyms
q Users defined in the database
q Storage allocation for the objects in the database and the quota assigned to each user
q Integrity constraints
q Database links
q Privileges and roles
q Replicated objects, snapshots, and their refresh characteristics
q Auditing information, such as access patterns for various objects
q Jobs in the job queue
q Locks and latches held in the database
q Alerts and table queues (advanced queues)
q Rollback segments
q SQL*Loader direct load information
q NLS settings

Oracle's data dictionary views can broadly be divided into the following classes:
q Views with the DBA prefix. These views contain information about the entire database. For example, the DBA_TABLES view gives information about all tables in the database. By default, these views are accessible only to users with the DBA role.
q Views with the USER prefix. USER views contain information about the objects owned by the user. For example, USER_TABLES gives information about the tables owned by the user.
q Views with the ALL prefix. These views contain information about all objects accessible to the user. Objects accessible to a user include the objects created by the user plus the objects on which the user has received grants from other users. For example, the ALL_TABLES view contains information about all tables accessible to a user.

Table 1.3 lists important Oracle8 data dictionary views. Similar views with DBA and ALL prefixes are available.

Table 1.3 Important data dictionary views

USER_ALL_TABLES: Contains descriptions of all tables available to the user
USER_CLUSTERS: Contains information about clusters created by the user
USER_CONSTRAINTS: Contains information about the constraints defined by the user
USER_DB_LINKS: Contains information about the database links created by the user
USER_ERRORS: Gives all current errors on all stored objects for the user
USER_EXTENTS: Lists all the extents used by the objects owned by the user
USER_FREE_SPACE: Lists all free extents in the tablespaces on which the user has privileges
USER_INDEXES: Gives information about indexes created by the user
USER_IND_COLUMNS: Gives the names of all the columns on which the user has created indexes
USER_JOBS: Gives all jobs in the job queue owned by the user
USER_RESOURCE_LIMITS: Gives resource limits applicable to the user
USER_SEGMENTS: Gives information about all segments owned by the user
USER_SEQUENCES: Lists information about all sequences owned by the user
USER_SNAPSHOTS: Gives information about all snapshots the user can view
USER_SYNONYMS: Gives the names of all private synonyms for the user
USER_TAB_COLUMNS: Gives the names of all columns in all tables the user owns
USER_TAB_PARTITIONS: Gives information about all table partitions owned by the user
USER_TABLES: Gives information about all tables the user owns
USER_TRIGGERS: Gives information for all triggers created by the user
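To illustrate, the dictionary can be queried like any other set of views (the views are standard; the table name in the second query is a hypothetical example):

```sql
-- Tables you own, with their tablespaces
SELECT table_name, tablespace_name
  FROM user_tables;

-- Constraints defined on a particular table
SELECT constraint_name, constraint_type
  FROM user_constraints
 WHERE table_name = 'EMP';
```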

Statistics and the Data Dictionary


Several data dictionary views contain columns with statistics about the object. For example, the USER_TABLES view contains the columns NUM_ROWS (number of rows in the table), BLOCKS (number of data blocks used by the table), AVG_ROW_LEN (average length of a row in the table), and so on. These columns are populated only when you analyze the object by using the ANALYZE command. You should analyze your objects at regular intervals to keep the statistics up-to-date.
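For example (the table name is hypothetical):

```sql
-- Gather exact statistics for one table
ANALYZE TABLE emp COMPUTE STATISTICS;

-- For large tables, sampling is cheaper than COMPUTE
ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 10 PERCENT;

-- The statistics columns are now populated
SELECT num_rows, blocks, avg_row_len
  FROM user_tables
 WHERE table_name = 'EMP';
```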

Dynamic Performance Tables


An Oracle instance maintains comprehensive information about its current configuration and activity. These

http://www.informit.com/content/0789716534/element_001.shtml (10 of 12) [26.05.2000 16:46:05]

informit.com -- Your Brain is Hungry. InformIT - Introducing Relational Databases and Oracle8 From: Using Oracle8

statistics are accessible to the database administrator through dynamic performance views. Most of these views are based on in-memory table-like structures known as virtual tables (because they aren't real tables). The majority of these views have names starting with V$. These virtual tables don't require disk storage space and aren't stored in any tablespace. By default, the dynamic performance views are accessible to the SYS user or to users having the SYSDBA role. The contents of these views are updated continuously while the instance is active.

Use TIMED_STATISTICS to gather timing information
Many dynamic performance views contain columns, such as WAIT_TIME and TOTAL_WAITS, that hold timing information. Oracle populates such columns only when the TIMED_STATISTICS parameter is set to TRUE.

Table 1.4 describes important dynamic performance views. These views are for Oracle8; some may not exist in Oracle7.

Table 1.4 Dynamic performance views

V$ACCESS - Displays information about locked database objects and the sessions accessing them
V$CONTROLFILE - Lists the names of the database control files
V$DATABASE - Contains miscellaneous database information, such as the database name, creation date, archive/no archive log mode, and so on (this information is from the control file)
V$DATAFILE - Contains information about the data files that are part of the database
V$DATAFILE_HEADER - Similar to V$DATAFILE, except that the information is based on the contents of each data file header
V$DB_LINK - Lists information about all active database links
V$FILESTAT - Displays read/write statistics for each database data file
V$FIXED_TABLE - Contains the names of all fixed tables in the database
V$FIXED_VIEW_DEFINITION - Lists the definitions of all the dynamic performance views; you can see how Oracle creates the dynamic performance views from its internal x$ tables, which are known as fixed tables
V$LICENSE - Lists license-related information
V$LOCK - Shows the locks held and requested; the information in this view is useful when tuning database performance or diagnosing hanging issues
V$LOCKED_OBJECT - Lists all the objects locked in the database and the sessions that are locking them
V$LOG - Lists information about the online redo logs
V$LOG_HISTORY - Contains information about the archived redo log files
V$MYSTAT - Lists statistics for the current session
V$PARAMETER - Lists the current values of the initialization parameters; the ISDEFAULT column indicates whether the parameter value is the default
V$PROCESS - Lists all Oracle processes; a value of 1 in the BACKGROUND column indicates an Oracle background process, whereas a NULL value in this column indicates a normal user process
V$RECOVER_FILE - Used to query information about the files needing media recovery; this view can be queried after the instance mounts the database
V$ROLLNAME - Lists the names of all online rollback segments
V$ROLLSTAT - Lists statistics for all online rollback segments
V$SESSION - Contains information about all current sessions; this view, one of the most informative, has about 35 columns
V$SESSION_EVENT - Contains information about the waits each session has incurred on events; use this view if you're experiencing slow performance
http://www.informit.com/content/0789716534/element_001.shtml (11 of 12) [26.05.2000 16:46:06]

informit.com -- Your Brain is Hungry. InformIT - Introducing Relational Databases and Oracle8 From: Using Oracle8

V$SESSION_WAIT - Lists the events and resources Oracle is waiting on; the information in this view can be used to detect performance bottlenecks
V$SESSTAT - Contains performance statistics for each active session
V$SESS_IO - Lists I/O statistics for each active session
V$STATNAME - Gives the names of the Oracle statistics displayed in V$SESSTAT and V$SYSSTAT
V$SYSSTAT - Contains performance statistics for the whole instance
V$SYSTEM_EVENT - Contains information for various Oracle events
V$TABLESPACE - Lists the names of all tablespaces in the database
V$TRANSACTION - Lists statistics related to transactions in the instance
V$WAITSTAT - Contains block contention statistics

Global dynamic performance views
In a parallel server environment, every V$ view has a corresponding GV$ view. These views, known as global dynamic performance views, contain information about all active instances of an Oracle parallel server environment. The INST_ID column displays the instance number to which the information displayed in the GV$ view belongs.

Use fixed tables with caution!
Oracle doesn't encourage the use of the fixed tables listed in V$FIXED_TABLE because their structure isn't published and can change.
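To give a flavor of how these views are queried, here's a hedged sketch. It assumes a SYSDBA-privileged connection on an open Oracle8 instance; the column names are those documented in Table 1.4.

```sql
-- Sketch only: show system-wide read statistics from V$SYSSTAT.
SELECT name, value
  FROM v$sysstat
 WHERE name LIKE '%reads%';

-- List the events sessions are currently waiting on,
-- filtering out idle SQL*Net waits.
SELECT sid, event, wait_time
  FROM v$session_wait
 WHERE event NOT LIKE 'SQL*Net%';
```

Remember that WAIT_TIME is populated only when TIMED_STATISTICS is set to TRUE, as noted above.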

Creating a Database
From: Using Oracle8. Author: David Austin. Publisher: Que.

Chapter contents:
- Prerequisites for Creating a Database
- Choosing Initialization Parameters for your Database
- Getting Ready to Create a Database
  - Organizing the Database Contents
  - Designing a Database Structure to Reduce Contention and Fragmentation
  - Decide on the Database Character Set
  - Start the Instance
- Choosing the Method for Creating the Database
  - Using the Oracle Installer (ORAINST) to Create a Database
  - Using the CREATE DATABASE Command
- Creating a Database from the Seed Database
- Checking the Status of your Database
  - Examining Basic Views
  - Checking the Oracle Alert Log

In this chapter you learn to:
- Create a new database
- Create an Oracle service for Windows NT
- Run optional data dictionary scripts
- Understand the initialization parameters
- Use the alert log

Prerequisites for Creating a Database


Before you can create an Oracle database, you need to configure the kernel for shared memory. In UNIX, shmmax needs to be set high enough to allow for the total SGA specified in the initialization file. In Windows NT, make sure that virtual memory isn't more than twice the physical memory on the system.

You also need to make certain decisions regarding how the database will be used and configured, including the following:
- Database sizing. Proper sizing of the database files can help you choose your initialization parameters and results in an optimally tuned database from the start.
- Changing passwords for SYS and SYSTEM. These passwords are used to log on to the database and perform administrative tasks. This can be done after database creation, but the passwords of the SYS and SYSTEM users should be changed as soon as possible. Use the ALTER USER command to change these passwords. It's recommended that you create a new account with SYSDBA privileges and use that account to create and own the database.

Protect the password for SYS


Because SYS is the owner of the data dictionary, you should protect that password. Allowing the password for SYS to get into the wrong hands can lead to tremendous damage to the database, to the point that all data can be lost. The default password for SYS is CHANGE_ON_INSTALL, whereas the default password for SYSTEM is MANAGER.

- MTS configuration. The multithreaded server (MTS) pools connections and doesn't allocate a single thread per connection. As a result, it avoids the stack overflow and memory allocation errors that would occur with a dedicated connection per thread. A multithreaded server configuration allows many user threads to share very few server threads. The user threads connect to a dispatcher process, which routes client requests to the next available server thread, thereby supporting more users. (You can configure MTS after database creation.)
- Setting the environment variables. Setting the ORACLE_SID, ORACLE_HOME, and path variables to the correct values allows you to start the correct instance.

Prepare the operating system environment for database creation
The operating system memory parameters should also be set properly. Check your operating system-specific documentation for the parameters to set.

Prepare for and create a database (general steps)
1. Create the initSID.ora parameter file.
2. Create the configSID.ora file.
3. Create the database script crdbSID.ora.
4. Create the database.
5. Add rollback segments.
6. Create database objects for tools.

The DBA's operating system privileges
Your database administrator login should have administrator privileges on the operating system to be able to create a database.

Choosing Initialization Parameters for your Database


The instance for the Oracle database is started by using a parameter file (initSID.ora) that should be customized for the database. You can create this file by making a copy of the one provided by Oracle on the distribution media, or by using the init.ora file from the seed database (if installed) as a template. Rename this file initSID.ora (for example, for the SID ABCD, the name of the initialization file would be initABCD.ora), and then edit it to customize it for your database.

Change these parameters from their default values
Most people make the mistake of leaving the initialization parameters at their default values. These default values aren't ideal for most systems. You need to choose the initialization parameters carefully with your database environment in mind.

The parameter file is read only at instance startup. If it's modified, you need to shut down and restart the instance for the new values to take effect. You can edit the parameter file with any operating system editor. Most parameters have a default value, but some parameters need to be modified with uniqueness and performance in mind. Table 2.1 lists parameters that should be specified.

Table 2.1 Initialization parameters that you should modify

DB_NAME - Database identifier (maximum of eight characters). To change the name of an existing database, use the CREATE CONTROLFILE statement to recreate your control file(s) and specify a new database name.
DB_DOMAIN - The network domain where the database is created.
CONTROL_FILES - Names of the control files. If you don't change this parameter, the control files of other databases can be overwritten by the new instance, making the other instances unusable.
DB_BLOCK_SIZE - Size in bytes of Oracle database blocks.


SHARED_POOL_SIZE - Size in bytes of the shared pool.
BACKGROUND_DUMP_DEST - Location where background trace files will be placed.
USER_DUMP_DEST - Location where user trace files will be placed.
DB_BLOCK_BUFFERS - Number of buffers in the buffer cache.
COMPATIBLE - Version of the server that this instance is compatible with.
IFILE - Name of another parameter file included for startup.
MAX_DUMP_FILE_SIZE - Maximum size in OS blocks of the trace files.
PROCESSES - Maximum number of OS processes that can simultaneously connect to this instance.
ROLLBACK_SEGMENTS - Rollback segments allocated to this instance. Refer to the Oracle8 tuning manual for guidelines on determining the number and size of rollback segments based on the anticipated number of concurrent transactions.
LOG_BUFFER - Number of bytes allocated to the redo log buffer in the SGA.
LOG_ARCHIVE_START - Enables or disables automatic archiving if the database is in ARCHIVELOG mode.
LOG_ARCHIVE_FORMAT - Default filename format used for archived logs.
LOG_ARCHIVE_DEST - Location of archived redo log files.
LICENSE_MAX_USERS - Maximum number of users created in the database.
LICENSE_MAX_SESSIONS - Maximum number of concurrent sessions for the instance.
LICENSE_SESSIONS_WARNING - Warning limit on concurrent sessions.

Database names should be unique
Attempting to mount two databases with the same name gives you the error ORA-01102: cannot mount database in EXCLUSIVE mode during the second mount.

Setting the parameters
The ideal values for these parameters are application dependent and are discussed in more detail in Chapter 21, "Identifying and Reducing Contention," and Chapter 22, "Tuning for Different Types of Applications." Setting these values involves some trial and error. For DSS systems, it's recommended that you choose large values for these parameters; for OLTP systems, choose small values.

The following is a sample init.ora file:

db_name = SJR
db_files = 1020
control_files = (E:\ORANT\database\ctl1SJR.ora,
                 E:\ORANT\database\ctl2SJR.ora)
db_file_multiblock_read_count = 16
db_block_buffers = 550
shared_pool_size = 9000000
log_checkpoint_interval = 8000
processes = 100
dml_locks = 200
log_buffer = 32768
sequence_cache_entries = 30
sequence_cache_hash_buckets = 23
#audit_trail = true
#timed_statistics = true
background_dump_dest = E:\ORANT\rdbms80\trace
user_dump_dest = E:\ORANT\rdbms80\trace
db_block_size = 2048
compatible = 8.0.3.0.0
sort_area_size = 65536
log_checkpoint_timeout = 0
remote_login_passwordfile = shared
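One way to confirm which of these settings actually took effect, sketched here under the assumption that the instance is up and you're connected with sufficient privileges, is to query V$PARAMETER for non-default values:

```sql
-- List every initialization parameter that has been changed
-- from its default (ISDEFAULT = 'FALSE').
SELECT name, value
  FROM v$parameter
 WHERE isdefault = 'FALSE'
 ORDER BY name;
```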

max_dump_file_size = 10240

Create an initialization file
1. Copy the template file. In UNIX, copy $ORACLE_HOME/rdbms/install/rdbms/initx.orc to $ORACLE_HOME/dbs/initSID.ora. In Windows NT, copy $ORACLE_HOME\database\initorcl.ora to $ORACLE_HOME\database\initSID.ora.
2. Edit initSID.ora and change the following parameters:

   Parameter             UNIX Setting                  Windows NT Setting
   %pfile_dir%           ?/dbs                         ?/database
   %config_ora_file%     configSID.ora (created next)  configSID.ora (created next)
   %rollback_segs%       r01, r02                      r01, r02
   %init_ora_comments%   #                             #

Create configSID.ora
1. In UNIX, copy ?/rdbms/install/rdbms/cnfg.orc to ?/dbs/configSID.ora. In Windows NT, copy configorcl.ora to configSID.ora.
2. Edit the configSID.ora file with any ASCII text editor and set the following parameters: control_files, background_dump_dest, user_dump_dest, and db_name.

Create the database script
1. Copy $ORACLE_HOME/rdbms/install/rdbms/crdb.orc to $ORACLE_HOME/dbs/crdbSID.sql.
2. Modify the crdbSID.sql file to set the following to the appropriate values: db_name, maxinstances, maxlogfiles, db_char_set, system_file, system_size, log1_file, log1_size, log2_file, log2_size, log3_file, and log3_size.

When it's run, crdbSID.sql does the following:
- Runs the catalog.sql script, which creates the data dictionary
- Creates an additional rollback segment, r0, in SYSTEM
- Creates the tablespaces rbs, temporary, tools, and users
- Creates additional rollback segments r01, r02, r03, and r04 in rbs
- Drops the rollback segment r0 in SYSTEM
- Changes the temporary tablespaces for SYS and SYSTEM
- Runs catdbsyn.sql as SYSTEM to create private synonyms for the DBA-only dictionary views

Getting Ready to Create a Database


Creating a database is the first step in organizing and managing a database system. You can use the following guidelines for database creation on all operating systems; check your operating system-specific documentation for platform-specific instructions.

Before creating a database, take a complete backup of all your existing databases to protect against accidental modification or deletion of existing files during database creation. The backup should contain the parameter files, data files, redo log files, and control files.

Mirror your control and redo log files
The control files and redo log files help you recover your database. To keep from losing a control file, keep at least two copies of it active on different physical devices. Also, multiplex the redo log files and place the log group members on different disks.

Also decide on a backup strategy and the size that will be required for online and archived redo logs. Backup strategies are discussed in Chapter 13, "Selecting and Implementing a Backup Strategy."

Organizing the Database Contents


You organize the database contents by using tablespaces. On some platforms, the Oracle installer creates a seed database, which has a number of predefined tablespaces. The tablespace structure should be carefully chosen by considering the characteristics of the data to minimize disk contention and fragmentation, and to improve overall performance. In addition to the SYSTEM tablespace provided with the installation, Table 2.2 describes several other suggested tablespaces. You can create these tablespaces by using the CREATE TABLESPACE command, as shown later in the section "Using the CREATE DATABASE Command."

Use multiple tablespaces
Production data and indexes should be stored in separate tablespaces.

Table 2.2 Suggested tablespaces to be created with the database

TEMP - Used for sorting; contains temporary segments
RBS - Stores additional rollback segments
TOOLS - Stores tables needed by the Oracle Server tools
APPS_DATA - Stores production data
APPS_IDX - Stores indexes associated with the production data in the APPS_DATA tablespace
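A hedged sketch of creating one of the tablespaces in Table 2.2 follows. The file path, size, and storage values are illustrative assumptions, not prescriptions; adjust them to your own environment.

```sql
-- Illustrative only: create a tablespace for additional
-- rollback segments on a file and size of your choosing.
CREATE TABLESPACE rbs
  DATAFILE 'E:\ORANT\DATABASE\rbs1SJR.ora' SIZE 20M
  DEFAULT STORAGE (INITIAL 100K NEXT 100K PCTINCREASE 0);
```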

Designing a Database Structure to Reduce Contention and Fragmentation


Separating groups of objects, such as tables with different fragmentation propensities, can minimize contention and fragmentation. You can use Table 2.3 as a guideline for separating objects.

Table 2.3 Fragmentation propensity

Data dictionary - Zero
Rollback segments - Medium
Temporary segments - High
Application data - Low

You can reduce disk contention by being familiar with the way in which data is accessed and by separating the data segments into groups based on their usage, such as separating:
- Segments with different backup needs
- Segments with different security needs
- Segments belonging to different projects
- Large segments from smaller segments
- Rollback segments from other segments
- Temporary segments from other segments
- Data segments from index segments

Database sizing issues should be considered when estimating the size of the tables and indexes.

Decide on the Database Character Set


After the database is created, you can't change the character set without recreating the database. If users will access the database by using a different character set, the database character set should be the same as or a superset of all the character sets that would be used. Oracle8 uses encoding schemes that can be commonly characterized as single-byte 7-bit, single-byte 8-bit, varying-width multi-byte, and fixed-width multi-byte. Refer to the Oracle8 Server reference guide for limitations on using these schemes. SEE ALSO For more information on Oracle's National Language Support (NLS) feature and character sets,
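Because the character set can't be changed later, it's worth verifying it after creation. A small sketch, assuming the standard NLS_DATABASE_PARAMETERS dictionary view is available:

```sql
-- Verify the database character set after creation.
SELECT value
  FROM nls_database_parameters
 WHERE parameter = 'NLS_CHARACTERSET';
```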

Start the Instance


Make sure that the following parameters are set properly in the environment. If they aren't, your instance won't start, or the wrong instance might start:
- ORACLE_SID. This parameter is used by Oracle to determine the instance to which the user will connect. If ORACLE_SID isn't set properly and the CREATE DATABASE statement is run, you can wipe out your existing database and all its data.
- ORACLE_HOME. This parameter gives the full pathname of the Oracle system home directory.
- PATH. It should include $ORACLE_HOME.
- ORA_NLS. This is the path to the language object files. If ORA_NLS isn't set and the database is started with languages and character sets other than the database defaults, they won't be recognized.

After these environment variables are verified, you can connect to Server Manager as internal and STARTUP NOMOUNT.

Set the environment variables in UNIX
1. Set the ORACLE_SID variable as follows for the sh shell (XXX is your SID):
   ORACLE_SID=XXX; export ORACLE_SID
2. Set the variable as follows for the csh shell:
   setenv ORACLE_SID XXX
3. Verify that ORACLE_SID has been set:
   echo $ORACLE_SID
4. Start up the instance in the nomount state:
   $ svrmgrl
   SVRMGR> connect internal
   SVRMGR> startup nomount

Set the environment variables in Windows NT
1. Use regedt32 to set the variables in the Registry's \HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE hive. Or, from a DOS prompt, type
   C:\> set ORACLE_SID=XXX
   where XXX is your SID name (maximum of four characters).
2. Use the Services tool in the Windows Control Panel to ensure that the ORACLESERVICESID service is started.

Using Instance Manager on Windows NT
On Windows NT, you can use the ORADIM utility (Instance Manager) to create a new instance and service for your database.

Choosing the Method for Creating the Database


You have several options for creating the database:
- You can use the Oracle installer to create a database. This is probably the easiest method because it allows the creation of a seed database, which you can use as a template to create new databases. Check your installation guide for platform-specific instructions for creating the seed database, which has a fixed, platform-specific schema. This method is discussed in the following section.
- You can modify the create database scripts provided with Oracle to create a database with your own schema. The name and location of these scripts vary with the operating system. On Windows 95/NT, the BUILDALL.SQL and BUILD_DB.SQL scripts can be used as a starting point; on UNIX systems, crdbSID.sql is a similar script. With this method, you copy the scripts, make the necessary changes, and then run them to create the database. This method lets you specify parameters such as MAXDATAFILES and specify multiple SYSTEM tablespace data files.
- You can manually create the database by executing the CREATE DATABASE command. Refer to the Oracle SQL reference manual for the complete syntax. This method allows for more flexibility by letting you specify parameters such as MAXDATAFILES or multiple SYSTEM tablespace data files, but there's also more possibility of syntax errors.

After the database is created, you can run catalog.sql and catproc.sql while connected as the SYS or "internal" account to create the data dictionary views.

After the database is created, the SYSTEM tablespace and the SYSTEM rollback segment will exist. A second rollback segment must be created and activated in the SYSTEM tablespace before any other tablespace can be created in the database. To create a rollback segment, from the Server Manager prompt type

SVRMGR> CREATE ROLLBACK SEGMENT newsegment
        TABLESPACE system
        STORAGE (...);

Refer to the SQL Language manual for the complete syntax of the CREATE ROLLBACK SEGMENT command.
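The STORAGE clause above is left elided in the original. As a hedged, concrete sketch (the segment name and extent sizes are illustrative assumptions), the second rollback segment might be created and brought online like this:

```sql
-- Create and enable a second rollback segment in SYSTEM so
-- that additional tablespaces can be created. Values are
-- illustrative; size them for your own workload.
CREATE ROLLBACK SEGMENT r0
  TABLESPACE system
  STORAGE (INITIAL 50K NEXT 50K MINEXTENTS 2 MAXEXTENTS 20);

ALTER ROLLBACK SEGMENT r0 ONLINE;
```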

Using the Oracle Installer (ORAINST) to Create a Database


This menu-driven method is probably the easiest because it runs the necessary scripts for any selected product. You can use this method to create a seed database. The installation guide for your platform should have specific instructions for this purpose.

Oracle's installer isn't very flexible
Using the Oracle installer for database creation isn't as flexible as the preceding methods in terms of specifying parameters such as MAXDATAFILES. If this method is used, you'll have to create the other standard non-SYSTEM tablespaces yourself.

Using the CREATE DATABASE Command


You also can create a database by using the SQL command CREATE DATABASE:

CREATE DATABASE database
    [CONTROLFILE [REUSE]]
    [LOGFILE filespec[, ...]]
    MAXLOGFILES integer
    MAXLOGMEMBERS integer
    MAXLOGHISTORY integer
    DATAFILE filespec[, ...]
    MAXDATAFILES integer
    MAXINSTANCES integer
    ARCHIVELOG|NOARCHIVELOG
    EXCLUSIVE
    CHARACTERSET charset

Table 2.4 lists the settings available with the CREATE DATABASE command.

Table 2.4 CREATE DATABASE settings

database - The name of the database to be created.
CONTROLFILE REUSE - Specifies that existing control files specified by the CONTROL_FILES parameter can be reused. If REUSE is omitted and the control files exist, you'll get an error.
LOGFILE - Specifies one or more files to be used as redo log files. Each filespec specifies a redo log file group containing one or more redo log file members, or copies. If you omit this parameter, Oracle creates two redo log file groups by default.
MAXLOGFILES - Specifies the maximum number of redo log file groups that can ever be created for this database.
MAXLOGMEMBERS - Specifies the maximum number of members, or copies, for a redo log file group.
MAXLOGHISTORY - Useful only if you're using the Parallel Server option in parallel and ARCHIVELOG mode. It specifies the maximum number of archived redo log files for automatic media recovery.
DATAFILE - Specifies one or more files to be used as data files.
MAXDATAFILES - Specifies the maximum number of data files that can ever be created for this database.
MAXINSTANCES - Specifies the maximum number of instances that can simultaneously have this database mounted and open.
ARCHIVELOG or NOARCHIVELOG - Establishes the mode for the redo log file groups. NOARCHIVELOG is the default mode.
EXCLUSIVE - Mounts the database in exclusive mode after it's created. In this mode, only one instance can access the database.
CHARACTERSET - Specifies the character set the database uses to store the data. This parameter can't be changed after the database is created. The supported character sets and the default value of this parameter are operating system dependent.

Oracle performs the following operations when executing the CREATE DATABASE command:
- Creates the data files as specified (if previously existing data files are specified, their data is erased)
- Creates and initializes the specified control files
- Creates and initializes the redo logs as specified
- Creates the SYSTEM tablespace and the SYSTEM rollback segment
- Creates the data dictionary
- Creates the SYS and SYSTEM users
- Specifies the character set for the database
- Mounts and opens the database

The data dictionary may not be created automatically
You need to run the SQL scripts that create the data dictionary (catalog.sql and catproc.sql) if these scripts aren't run from your database creation script.

The following example shows how to create a simple database:

create database test
controlfile reuse
logfile
  GROUP 1 ('C:\ORANT\DATABASE\log1atest.ora',
           'D:\log1btest.ora') size 500K reuse,
  GROUP 2 ('C:\ORANT\DATABASE\log2atest.ora',
           'D:\log2btest.ora') size 500K reuse
datafile 'C:\ORANT\DATABASE\sys1test.ora' size 10M reuse
  autoextend on next 10M maxsize 200M
character set WE8ISO8859P1;

This command creates a database called TEST with one data file (sys1test.ora) that's 10MB in size and multiplexed redo log files with a size of 500KB each. The character set will be WE8ISO8859P1.

Creating a Database from the Seed Database


The following steps can be used to create a database called MARS, using the starter (seed) database ORCL. If you don't have the starter database, you can use the sample initialization file INITORCL.80 in the c:\orant\database directory.

Create a database in Windows NT with BUILD_ALL.SQL
1. Create a directory called MARS.
2. Copy C:\ORANT\DATABASE\INITORCL.ORA to C:\MARS (renaming it INITMARS.ORA).
3. Modify the DB_NAME, CONTROL_FILES, GLOBAL_NAMES, and DB_FILES parameters in the INITMARS.ORA file.
4. Use the ORADIM80 command to create the service. For example, from a DOS prompt, type
   C:\> oradim80 -NEW -SID TEST -INTPWD password
        -STARTMODE AUTO -PFILE c:\orant\database\inittest.ora
   This command creates a new service called TEST, which is started automatically when Windows NT starts. INTPWD is the password for the "internal" account; the PFILE parameter provides the full pathname of initSID.ora.

When to create Oracle services
An Oracle service should be created and started only if you want to create a database and don't have any other database on your system, or if you want to copy an existing database to a new database and retain the old database.

5. Set ORACLE_SID to MARS:
   C:\> set ORACLE_SID=MARS
6. Copy the BUILD_DB.SQL script to C:\MARS as BUILD_MARS.SQL.
7. Edit the BUILD_MARS.SQL script as follows:
   - Set PFILE to the full pathname for INITMARS.ORA.
   - Change CREATE DATABASE ORACLE to CREATE DATABASE MARS.
   - Change the data file and log filenames to the appropriate names.
   - Modify the location of the Oracle home directory to point to C:\MARS.
8. Use Control Panel's Services tool to verify that the service ORACLESERVICEMARS is started. If it's not started, start it.
9. Start Server Manager and connect to the database as "internal":
   C:\> svrmgr30
   SVRMGR> connect internal/password
10. Start the database in the NOMOUNT state:
    SVRMGR> STARTUP NOMOUNT PFILE=c:\mars\initmars.ora
11. Turn on spooling to trap error messages and run BUILD_MARS.SQL:
    SVRMGR> SPOOL build.log
    SVRMGR> @BUILD_MARS.SQL
    If there are errors while running BUILD_MARS.SQL, fix them and rerun the script until it completes successfully.
12. Generate the data dictionary by running CATALOG.SQL:
    SVRMGR> @%RDBMS80%\ADMIN\CATALOG.SQL
13. Run CATPROC.SQL to generate the objects used by PL/SQL:
    SVRMGR> @%RDBMS80%\ADMIN\CATPROC.SQL
14. If you want additional features, run the appropriate scripts, such as CATREP8M.SQL for Advanced Replication.
15. Turn off spooling and check the log for errors.

All the MAX parameters are set when the database is created. To determine what parameters your database has been created with, execute the following:

SVRMGR> ALTER DATABASE BACKUP CONTROLFILE TO TRACE

This command creates an SQL script that contains several database commands:

CREATE CONTROLFILE REUSE DATABASE "SJR" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXDATAFILES 254
    MAXINSTANCES 1
    MAXLOGHISTORY 899
LOGFILE
    GROUP 1 'E:\ORANT\DATABASE\LOGSJR1.ORA' SIZE 200K,
    GROUP 2 'E:\ORANT\DATABASE\LOGSJR2.ORA' SIZE 200K
DATAFILE
    'E:\ORANT\DATABASE\SYS1SJR.ORA',
    'E:\ORANT\DATABASE\RBS1SJR.ORA',
    'E:\ORANT\DATABASE\USR1SJR.ORA',
    'E:\ORANT\DATABASE\TMP1SJR.ORA',
    'E:\ORANT\DATABASE\INDX1SJR.ORA'
;

To generate SQL statements for all the objects in the database, Export must query the data dictionary to find the relevant information about each object. Export uses the view definitions in CATEXP.SQL to get the information it needs. Run this script while connected as SYS or "internal." The views created by CATEXP.SQL are also used by the Import utility. Chapter 25, "Using SQL*Loader and Export/Import," discusses more about Oracle's Export and Import utilities. CATALOG.SQL and CATEXP.SQL views don't depend on each other You don't need to run CATALOG.SQL before running CATEXP.SQL, even though CATEXP.SQL is called from within CATALOG.SQL. This is because no view in CATEXP.SQL depends on views defined in CATALOG.SQL. Create an identical copy of database but with no data 1. Do a full database export with ROWS=N: C: > exp system/manager full=y rows=n file=fullexp.dmp This will create a full database export (full=y) without any rows (rows=n). 2. Run a full database import with ROWS=N: C: > imp system/manager full=y rows=n file=fullexp.dmp Creating a new database on the same machine If the new database is to be created on the same machine as the old database, you need to pre-create the new tablespaces because the old data files are already in use. Use Instance Manager to create a new database in Windows NT 1. From the Start menu choose Oracle for Windows NT and then NT Instance Manager. This will start the Instance Manager and show you the status and startup mode of all the SIDs (see Figure 2.1). Figure 2.1 : The Instance Manager dialog box shows the available instances. 2. Click the New button and supply the SID, internal password, and startup specifications for the new instance (see Figure 2.2). Figure 2.2 : Provide the specifications for the new instance. 3. Click the Advanced button and choose appropriate database name, logfile, and data file parameters and a character set for the new database (see Figure 2.3). 
Figure 2.3 : Provide the specifications for the new database. The Oracle Database Assistant can be used to create a database at any time. Use Oracle Database Assistant to create a new database in Windows NT 1. From the Start menu choose Programs, Oracle for Windows NT, Oracle Database Assistant. 2. Select Create a Database and click Next. 3. Choose the Typical or Custom option and click Next. The Custom option lets you to customize the parameters of the database that you're trying to create. 4. Choose Finish. In Windows NT, you can set the default SID by setting the Registry entry ORACLE_SID. Updating ORACLE_SID in the Windows NT Registry 1. From the DOS command prompt, type REGEDT32. Don't modify the Registry unless you know what you're doing!

http://www.informit.com/content/0789716534/element_002.shtml (10 of 13) [26.05.2000 16:46:14]

informit.com -- Your Brain is Hungry. InformIT - Creating a Database From: Using Oracle8

Be extremely careful when working with the Registry. Improperly set keys may prevent Windows NT from booting up.

2. Choose the key \HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOMEID.
3. From the Edit menu choose Add Value.
4. In the Value Name text box, type ORACLE_SID.
5. For the Data Type, choose REG_EXPAND_SZ.
6. Click OK.
7. Type your SID name in the String Editor text box and click OK.
8. Exit the Registry.

Checking the Status of your Database


After the database is created, regularly check its status by examining the data dictionary and the alert log.

Examining Basic Views


The data dictionary is one of the most important parts of the Oracle database. It's a set of tables and views that you can use to look up valuable information about the database, including:
- General database structure
- Information about schema objects
- Integrity constraints
- Database users
- Privileges and roles
- Space allocated to database objects

The catalog.sql and catproc.sql scripts can be run during or after database creation to create the commonly used data dictionary views and to install PL/SQL support, respectively. The data dictionary contains a set of base tables and an associated set of views that can be placed in the following categories:

View Category  Description
USER_xxx       Views accessible by any user that provide information on objects owned by that user
ALL_xxx        Views accessible by any user that provide information on all objects accessible by that user
DBA_xxx        Views that provide information on any database object (DBA privileges required)

Which data dictionary objects do I have?
All the data dictionary tables and views are owned by SYS. You can query the DICTIONARY table to obtain the list of all dictionary views.

The following examples show how to query the dictionary tables to obtain information about the database:
- To identify all the rollback segments in the current database and their status, use
  Select * from dba_rollback_segs;
- To identify all the data files in the current database and their status, use
  Select * from dba_data_files;
- To identify all the tablespaces in the current database and their status, use
  Select * from dba_tablespaces;
- To identify all the users belonging to this database, use
  Select * from dba_users;
- To find out whether the database is in ARCHIVELOG mode, use
  Select * from v$database;
- To identify all the parameter values in use for the database, use
  Select * from v$parameter;
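Because all the dictionary views are listed in the DICTIONARY table, you can also query it directly. For example, a quick sketch that lists the DBA-level views along with their descriptions (run it as a suitably privileged user):

```sql
-- List the DBA_ dictionary views and their descriptions
SELECT table_name, comments
  FROM dictionary
 WHERE table_name LIKE 'DBA%'
 ORDER BY table_name;
```

The same query with a different pattern ('USER%' or 'ALL%') shows the other two categories of views.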

Checking the Oracle Alert Log


When diagnosing a database problem, the first place to look for information and errors is the alert log (its name is operating system dependent). If this file isn't present, Oracle automatically creates it during database startup. The alert log can point you to the location of trace files, which can give a lot of insight into the problems encountered. It also contains additional information indicating the status of the database and what's currently happening in it.

Locating trace files
The trace file will be located in the directory specified by BACKGROUND_DUMP_DEST, USER_DUMP_DEST, or CORE_DUMP_DEST, depending on the exact error and its cause.

SEE ALSO
For more information on the contents and usage of the alert log,

When the database is started, the following information is recorded in the alert log:
- All the init.ora parameters
- Informational messages indicating that the background processes have been started
- The thread used by the instance
- The log sequence that the LGWR is currently writing to

In general, the alert log records all important incidents in the database, including:
- Database startups
- Database shutdowns
- Rollback segment creations
- Tablespace creations
- ALTER statements issued
- Log switches
- Error messages

Each entry has a timestamp associated with it, and each non-error message has an entry marking its beginning and another marking its successful completion. Check this file frequently for error messages; for each error, the alert log will point to a trace file with more information. The following is a sample alert log:

The log begins with a file header showing information about your system, the initialization parameters, and then the database in NOMOUNT state with the CREATE DATABASE command:

LOGFILE 'E:\ORANT\database\logSJR1.ora' SIZE 200K,
        'E:\ORANT\database\logSJR2.ora' SIZE 200K
MAXLOGFILES 32
MAXLOGMEMBERS 2
MAXLOGHISTORY 1
DATAFILE 'E:\ORANT\database\Sys1SJR.ora' SIZE 50M
MAXDATAFILES 254
MAXINSTANCES 1
CHARACTER SET WE8ISO8859P1

NATIONAL CHARACTER SET WE8ISO8859P1
Thu Jan 29 09:33:50 1998
Successful mount of redo thread 1.
Thread 1 opened at log sequence 1
  Current log# 1 seq# 1 mem# 0: E:\ORANT\DATABASE\LOGSJR1.ORA
Successful open of redo thread 1.
Thu Jan 29 09:33:50 1998
SMON: enabling cache recovery
Thu Jan 29 09:33:50 1998
create tablespace SYSTEM datafile 'E:\ORANT\database\Sys1SJR.ora' SIZE 50M
default storage (initial 10K next 10K) online
Thu Jan 29 09:34:10 1998
Completed: create tablespace SYSTEM datafile 'E:\ORANT\datab
Thu Jan 29 09:34:10 1998
create rollback segment SYSTEM tablespace SYSTEM
storage (initial 50K next 50K)
Completed: create rollback segment SYSTEM tablespace SYSTEM
Thu Jan 29 09:34:14 1998
Thread 1 advanced to log sequence 2
  Current log# 2 seq# 2 mem# 0: E:\ORANT\DATABASE\LOGSJR2.ORA
Thread 1 cannot allocate new log, sequence 3
Checkpoint not complete
  Current log# 2 seq# 2 mem# 0: E:\ORANT\DATABASE\LOGSJR2.ORA
Thread 1 advanced to log sequence 3
  Current log# 1 seq# 3 mem# 0: E:\ORANT\DATABASE\LOGSJR1.ORA
Thread 1 advanced to log sequence 4
  Current log# 2 seq# 4 mem# 0: E:\ORANT\DATABASE\LOGSJR2.ORA
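If you need to find where the alert log's companion trace files are written, the dump destination parameters can be read straight from the running instance by using the V$PARAMETER view queried earlier:

```sql
-- Show the trace and dump file destinations for this instance
SELECT name, value
  FROM v$parameter
 WHERE name IN ('background_dump_dest',
                'user_dump_dest',
                'core_dump_dest');
```

The values returned are the directories in which to look for background, user, and core trace files, respectively.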



informit.com -- Your Brain is Hungry. InformIT - Migrating an Oracle7 Database to Oracle8 From: Using Oracle8


Migrating an Oracle7 Database to Oracle8



- Why Migrate?
- Selecting a Migration Method
  - Selecting Oracle's Migration Utility
  - Using Export/Import
  - Using Table Copying
- Testing the Migration
  - Identifying Types of Tests
  - Setting Up a Test Program
  - Testing and Retesting
- Performing the Migration
  - Executing the Migration Process with Oracle's Migration Utility
  - Executing the Migration with Export/Import or Table Copying
- Completing Post-Migration Steps
  - Precompiler Applications
  - OCI Applications
  - SQL*Plus Scripts
  - SQL*Net
  - Enterprise Backup Utility (EBU)
  - Standby Databases
- Migration: Final Considerations

Why Migrate?
You may want to migrate an Oracle7 database to Oracle8 for a number of reasons. You may want to take advantage of one or more of Oracle8's new features, outlined in Appendix B, "What's New to Oracle8." You may simply want to benefit from the faster processing that the revised code tree should allow. Whatever the reason, you have a number of options regarding the method you can use to complete the migration process. One of them is a migration tool provided by Oracle. Although this chapter concentrates on the migration tool, it also discusses the alternatives. In the following section you learn about all the options. After reading it, you should be able to determine which method is best to use to migrate your database. The structural changes in Oracle8

http://www.informit.com/content/0789716534/element_003.shtml (1 of 16) [26.05.2000 16:46:26]


A migration is necessary because the new functionality in Oracle8 requires changes to the basic items in the data dictionary. Until the new dictionary is built, the Oracle8 kernel can't operate successfully. In addition, the structure of the data file header blocks has changed to support some of the new features. These changes must be in place for the code to work correctly. Unlike simple upgrades such as those you might have performed to move from version 7.2 to version 7.3, these structural changes require more than simply installing the new code and relinking the applications. For more details and a further discussion of the migration options, you should read the Oracle8 Server Migration manual, part number A54650-01.

Selecting a Migration Method


The end result of a migration from Oracle7 is a database that contains essentially the same user objects as the original database, but in data files with updated headers and supported by a data dictionary that allows the new Oracle8 features. In some cases, this may not be your ultimate goal. For example, you might want to migrate only a portion of your database for testing purposes, where you want to test only a subset of your applications before migrating the entire database, or because you don't need objects created for now-obsolete portions of the application. On the other hand, you might want to use the downtime required for the migration to make some structural changes to the database. This might include moving segments between different tablespaces or may simply involve coalescing free space in one or more fragmented tablespaces.

We will examine three basic approaches to migration in this chapter: Oracle's Migration utility, export/import, and table copying. You can read the details concerning each strategy in the following sections and decide which best suits you. To get started, look at Table 3.1 to see the basic features of each approach.

Choose your best migration method
Your choice of migration method will depend on how much you want to accomplish as part of the migration and on how much space you have to complete the task. It might also depend on the length of time you can afford to make the database inaccessible, because some methods take much longer than others.

TABLE 3.1 Overview of migration options

Option             Migrate Only  Need for Additional Space  Time Requirements
Migration utility  Yes           System tablespace          Least
Export/Import      No            Export dump file           Great
Table copying      No            Two databases              Greatest

As you can see, the fastest approach, the one needing the least overhead, is the Migration utility. However, you can't include other database-restructuring or related changes if you use this method. The Migration utility will migrate your entire database as it is. With the two other options, you can make changes to the structure, layout, and tablespace assignments, but you'll need more time and disk resources to complete these tasks. They're also more complicated to complete because you need to perform a number of additional steps. The details of the steps needed to complete each type of migration (and the reasons for choosing each) are listed in the appropriate sections following. Table 3.2 summarizes these options.

TABLE 3.2 Summary of migration method characteristics

Migration Utility:
- Automatic: requires little DBA intervention
- Requires minimal extra disk space
- Time is a factor of the number of objects, not database size
- Can only migrate forward
- Can't use for release-to-release moves
- All or nothing
- No structural changes can be made

Export/Import:
- Requires a new database build
- Can use large amounts of disk space
- Very slow for large databases
- Can migrate forward and backward
- Can use for release-to-release moves
- Partial migration possible
- Concurrent defragmentation and reorganization

Copy Commands:
- Requires lots of attention
- Requires both databases to be online
- Very slow for large databases
- Can migrate forward and backward
- Can use for release-to-release moves
- Partial migration possible
- Concurrent defragmentation and reorganization

Selecting Oracle's Migration Utility


You need to consider several key factors when planning to use the Migration utility:
- Available space in the system tablespace
- The need to migrate only a subset of the database
- Resources to test a full database migration
- Ancillary requirements, such as space defragmentation

Oracle's Migration utility is designed to perform the required structural changes to your existing database. It will actually build a new Oracle8 data dictionary in the same system tablespace as your current Oracle7 dictionary, and it will restructure your rollback segments and the header blocks of the database's data files on disk (see Figure 3.1). Users' objects, such as tables and indexes, stay just as they are, although the ways in which they're accessed (through the data dictionary and then to the files where they really reside) are changed. Again, the new internal structure is designed to make the database more efficient and to support a new set of features.

Figure 3.1: The Migration utility changes database structures, including the data dictionary, rollback segments, and data file header blocks, in place.

The migration process requires that your current Oracle7 and your new Oracle8 data dictionary reside in the database for a short period of time. This means that the system tablespace, the dictionary's home, must be large enough to hold both versions simultaneously. Therefore, the first item you need to consider before deciding whether to use the utility is the space you have available for the system tablespace. Your Oracle8 data dictionary will be about 50 percent larger than your Oracle7 dictionary. Of course, you may already have some of this space available, but most DBAs will find they need to add more.

Space requirements for the Migration utility
For a period of time, you'll need to have space for two versions of the data dictionary in the system tablespace and for two releases of Oracle in their respective Oracle Home directory structures.
The data dictionary will require about 2 1/2 times the space now consumed in your system tablespace. The Oracle8 installation will take 500MB, or more if you select many options. If you don't have the required disk space, either consider another migration strategy or wait until you can add the required capacity.

Your next decision point for using, or not using, the Migration utility is whether you want to migrate the database as is, or if you also want to make some changes. The Migration utility is an all-or-nothing tool. You can't migrate portions of the database because the database is migrated in situ. Similarly, because the data isn't being moved, you can't move segments between tablespaces and you can't coalesce free space in your tablespaces as part of the migration process.

Some DBAs don't like to make too many changes at one time, so even if they want to complete some restructuring tasks, they'll make these changes independently of the migration, completing them before or after the migration itself. Others have limited time windows in which to complete maintenance work and so try to make all required changes at a single time. You need to consider what else, if anything, you want to achieve as part of the migration processing. You also need to consider how much time you can take away from your user community while working on these steps.

Consider test options for a migrated database
If you want to convert your database to perform functional and similar tests, you can't use the Migration utility for continued work under Oracle7. If you make a copy of it first, you can convert the copy and simultaneously run the production Oracle7 database and the test Oracle8 version. However, making a copy is time-consuming, usually accomplished with the Export/Import tools or with some form of data unloader and SQL*Loader. You can create a partial test database by using the tables for just one or two representative application tasks.
You'd still need to complete integrated tests (if you needed them) after a full database conversion.

If you only need to perform a migration, Oracle's utility is a good choice. First, it's relatively fast because the changes are made on the current database structures; they don't have to be copied, moved, or otherwise duplicated (all relatively slow processes). In addition, the only factors that really affect the migration speed are the size of the System tablespace and the number of data files. The System tablespace is typically a small


portion of the overall database, and the number of data files is limited to 1,022 in Oracle7. Thus, even the largest databases typically take no more than a day to migrate.

Piecewise migration with Export/Import
Using this technique to migrate your database one piece at a time requires you to keep both database versions available, which means maintaining the Oracle7 and Oracle8 executables online. Further complications from this approach occur if the data in the different versions is in any way related. You might need to have users switch between databases to perform different functions or temporarily build a distributed database environment. It may also require that parts of both databases be inactive when it's time to move additional segments from Oracle7 to Oracle8. If you're moving only part of your database because you don't need the rest of it, these issues become irrelevant.

Later in this chapter's "Executing the Migration Process with Oracle's Migration Utility" section you'll find a detailed description of how to complete a migration with the Migration utility. First look at the other migration options and the test plan you need to construct, regardless of the migration approach you'll take.
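Returning to the space question raised earlier: because the System tablespace size largely determines the migration time, it's worth measuring it before you start. A quick sketch using the DBA_DATA_FILES view (run from SQL*Plus as a privileged user):

```sql
-- Current size of the SYSTEM tablespace, in megabytes
SELECT SUM(bytes)/1024/1024 AS system_mb
  FROM dba_data_files
 WHERE tablespace_name = 'SYSTEM';
```

Recall the guideline that, during the migration, the dictionary will need roughly 2 1/2 times the space it now consumes.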

Using Export/Import
If you decide to use the Export/Import tools to migrate your database, you need to plan for the following resources to be available:
- Space to store the export file
- Time to create the export
- A copy of the Oracle8 executables
- An empty Oracle8 database for the file being exported
- Time to perform the import

The amount of space and time needed for the initial export depends on the amount of data being exported. If you decide to move only part of your database to Oracle8, you need less time than if you are transferring the entire database. The time also depends on the speed of the devices to which you export. A fast disk drive allows a faster export than a slower tape drive. A very large database may also require a file too large for the operating system or for Oracle to handle. In this case you may need to use some form of operating system tool, such as a pipe, to move the data onto the appropriate media. Figure 3.2 shows the typical export/import steps. By using a pipe, you can send the output from your Export directly to the Import utility's input.

Figure 3.2: Migrating your database with export/import requires two distinct steps; the export dump file is the intermediary.

Migrate your database via export/import
1. Perform a full database export from Oracle7, after which you can remove the Oracle7 database and the Oracle7 home directory structure.
2. Install Oracle8 and then alter the environment variables and your parameter file to point to the Oracle8 structures. (See Appendix C, "Installing Oracle8," for installation instructions.)
3. Create an Oracle8 database.
4. Add the required tablespaces to this database.
5. Perform a full import.

Protect your current database
Before beginning your migration, it's recommended that you ensure you have a backup of the home directory and of the database.
Minimally, it's recommended that you keep the scripts you used to build the database in the first place; that way you have at least one easy way to reconstruct the database in case you run into problems with the Oracle8 version.

SEE ALSO
To learn how to create an Oracle8 database,
To add the required tablespaces to the database,

A variant of this method is to use an unload/loader approach to move the data. You can do this by building your own unloader utility or by finding one in the public domain. An unloader utility needs to extract the rows of data from your tables, as well as the definitions of the tables and all the other database objects; that includes indexes, userids, stored procedures, synonyms, and so on. You can also consider a hybrid approach, using the export to create only the object definitions and the unloader simply to create the row entries.
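Using the exp/imp syntax shown in Chapter 2, the full export and full import steps might look like the following sketch. The dump and log file names are placeholders, and you should substitute your own DBA account for system/manager:

```
C:\> exp system/manager full=y file=full7.dmp log=exp7.log
C:\> imp system/manager full=y file=full7.dmp log=imp8.log
```

The export runs against the Oracle7 database before it's removed; the import runs against the newly created Oracle8 database once its tablespaces are in place.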

http://www.informit.com/content/0789716534/element_003.shtml (4 of 16) [26.05.2000 16:46:26]

informit.com -- Your Brain is Hungry. InformIT - Migrating an Oracle7 Database to Oracle8 From: Using Oracle8

Advantages of unloader/loader technique
The big advantage to the unloader/loader approach is that you can use Oracle's SQL*Loader utility to reinsert the data when the definitions are applied to the database. This utility, running in its direct path mode (or, even better if you have the hardware to support it, in parallel direct mode), can complete the job of loading records much more quickly than the Import program.

SEE ALSO
To learn more about the Export and Import utilities,
To learn more about SQL*Loader,

Using Table Copying


As with export/import, you can use table copying to move just part of your database, to stage the migration, or simply to avoid migrating unneeded elements. The same caveats for incomplete database migration apply in this context as for the export/import approach.

To perform table copying, you need to have both databases (Oracle7 and Oracle8) available simultaneously, which includes not just the database storage but the two Oracle Home structures and the environments to run them both simultaneously. This makes the approach the most space-intensive of all three methods (see Figure 3.3).

Figure 3.3: Both databases must remain online during a migration using table copying.

The other drawback to this approach is that it really does only copy table definitions and their contents. To build the same database in the Oracle8 environment that you started with (including all the users, stored procedures, synonyms, views, and so on), you still need to find a method to copy these from Oracle7. It's therefore likely that you'll need to perform a partial export/import or even a data unload/reload, as discussed in the preceding section.

General steps for migrating from Oracle7 to Oracle8
1. Install Oracle8.
2. Create an Oracle8 database.
3. Add the required tablespaces and users.
4. Select the Oracle7 or Oracle8 database as your primary database.
5. Configure SQL*Net or Net8 with a listener to connect to your secondary database.
6. Create or modify a TNSNAMES.ORA file to identify your secondary database.
7. Use SQL*Plus to issue the required COPY commands for your primary database.
8. Add the views, synonyms, stored procedures, and other objects dependent on the tables into your secondary database.
9. Drop your Oracle7 environment.

Warning: Avoid points of no return
As always, before moving or destroying your production system, you should make a backup first.
Copying across database links
Rather than use the SQL*Plus COPY command, you can complete step 7 by creating database links in your primary database to access the secondary database and by using SQL CREATE TABLE...AS SELECT commands to copy table definitions and data between your databases. You'll find a detailed discussion of these steps later in the section "Executing the Migration with Export/Import or Table Copying."
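As an illustration of step 7 and this alternative, the two approaches might look like the following sketches. The connect string (ora7), credentials (scott/tiger), link name, and table name (emp) are placeholders rather than values from this chapter:

```sql
-- Approach 1: SQL*Plus COPY, pulling a table from the Oracle7 database
COPY FROM scott/tiger@ora7 CREATE emp USING SELECT * FROM emp

-- Approach 2: a database link plus CREATE TABLE...AS SELECT
CREATE DATABASE LINK ora7_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'ora7';

CREATE TABLE emp AS
  SELECT * FROM emp@ora7_link;
```

Either way, remember that these commands move only the table definitions and rows; indexes, grants, and other dependent objects must still be re-created in the target database.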

Testing the Migration


Software development and maintenance efforts should always include a good test plan as part of the acceptance strategy. Your migration from Oracle7 to Oracle8 is no different. Indeed, because you already have a working production database, you need to devise a test strategy to ensure that the end results of the migration contain the same, or better, production capabilities. This means testing not only the capabilities, but the performance and results as well.

You can apply a number of types of tests to help assure you and the users that everything is working properly at the end of the migration. In the next sections you see what types of tests you can use and when to use each type. From these selections, you can build a test program and identify the resources needed to complete the tests. Oracle Corporation strongly recommends running all these tests before concluding the migration.

Identifying Types of Tests


You can perform six basic types of tests to validate the migration:
- Migration test
- Minimal test
- Functional test
- Integration test
- Performance test
- Volume/load stress test

Migration Test
This test validates your migration strategy, whether it's to use the Migration utility or one approach to transfer the data from Oracle7 to Oracle8. It's intended to help you determine whether you've allotted sufficient resources for the migration, including disk space, time, and personnel.

Running a migration test
1. Create either a test copy of your whole database, or a subset if you don't have the resources to test the complete database migration. You don't have to complete this step if you're using export/import or table copying.
2. Execute the migration with the utility you plan to use when migrating your production database.
3. Upgrade the tools used by your application.

Resolving problems with migration tests
If this test fails, you may need to rethink the chosen migration strategy and possibly choose another method. For example, if the time taken is greater than your users are willing to allow, you may need to plan a staged migration; or if you were hoping to use export/import, you may need to consider the Migration utility instead.

Minimal Test
This type of testing involves migrating all or part of an application and simply attempting to run it. No changes are made to the application, and no performance or value testing should be attempted. This test simply confirms that the application can be started against the migrated database; it's not intended to reveal all problems that could occur.

Resolving problems with minimal tests
A failure of a minimal test typically indicates problems with the migration itself, such as missing tables, synonyms, views, or stored procedures. It's most likely to fail if you've tried a partial database migration or if you're using one of the data-transfer strategies rather than the Migration utility.
You should perform this test after a successful migration test and before moving on to the more rigorous tests. It will require relinking the tools used by the applications, so you'll need to maintain a separate copy of the application if the users need to continue using the production Oracle7 database.

Running a minimal test
1. Complete the migration test.
2. Have users, developers, or a test suite run selected programs from the applications.

Functional Test
The functional test follows the minimal test and ensures that the application runs just as it did before the migration. This involves having users, or simulated users, executing the different application components and verifying that the outcome is the same as in the pre-migrated database. The results of any queries, reports, or DML should be the same as they were on the pre-migrated database.

http://www.informit.com/content/0789716534/element_003.shtml (6 of 16) [26.05.2000 16:46:26]

informit.com -- Your Brain is Hungry. InformIT - Migrating an Oracle7 Database to Oracle8 From: Using Oracle8

If you're using Oracle8 to enhance the application, you may also want to add the new functionality to each application during this test phase to ensure that the application continues to provide reliable results with the new features in place.

Conducting a functional test
1. Complete the migration and minimal tests.
2. If you intend to add new functionality to your applications, have the developers make these changes.
3. Have users, developers, or a test suite execute the applications, testing all functions and features.

Tracking the cause of errors detected during functional testing may involve close cooperation between the DBA and the application developers. It's important, therefore, to ensure that the development organization is apprised of this testing phase and can commit the necessary resources. If you're running third-party application software, you may need to get help from your vendor should this test fail.

Testing third-party applications
Some vendors may not be aware of all the changes made in Oracle8. If you're using third-party applications, you shouldn't commit to a completed migration until the functional tests have been rigorously completed.

Integration Test
Integrated testing involves executing the application just as you did in the pre-migrated database. This includes establishing client/server connections, using any GUI interfaces, and testing online and batch functions. This test ensures that all the application's components continue to work together as before.

Resolving problems with integration tests
Should you run into problems with these tests, you'll have to isolate whether the cause is in a single component, such as SQL*Net or Net8, or whether it's part of the overall migration. Generally, if you've completed the functional testing successfully, the likelihood is that the problem is with one component, or with the interface between a pair of components.

Running an integration test
1.
Complete the migration, minimal, and functional tests.
2. Install and configure any communication software, such as Net8, for client/server or multi-tier architectures.
3. Install and configure any drivers, such as ODBC drivers, that the applications use.
4. Have the users, developers, or a test suite run the applications across the network, using the same front-end tools and middleware that are planned for database access.

Performance Test
Although the kernel code tree has been optimized in Oracle8, you might discover that some parts of your applications aren't running as well as before the migration. This could be due to a number of factors, such as tuning efforts that were made to avoid a problem in the earlier release. You need to run the performance tests to ensure that overall processing throughput is at least the same as, if not better than, the Oracle7 performance.

Resolving problems with performance tests
If you find performance problems, you should attempt to resolve them by using the database tuning techniques described in Chapter 20, "Tuning Your Memory Structures and File Access," through Chapter 23, "Diagnosing and Correcting Problems."

Conducting a performance test
1. Complete the previous tests to ensure that you're running the equivalent of a full production system.
2. Have users run their interactive and batch programs as they would in a production environment.
3. Monitor and record the database performance by using queries against the various dynamic performance tables or by using such tools as the UTLBSTAT.SQL and UTLESTAT.SQL scripts.
4. Solicit feedback from users as to their perceptions of performance and response times compared to the current production system.

If you've been monitoring your Oracle7 database with the various analytic and diagnostic tools, you can easily make comparisons by using the same tools on the migrated database.

SEE ALSO

informit.com -- Your Brain is Hungry. InformIT - Migrating an Oracle7 Database to Oracle8 From: Using Oracle8

For an overview of the dynamic performance tables,
A detailed description of the UTLBSTAT.SQL and UTLESTAT.SQL utilities begins on

Volume/Load Stress Test

Ideally, you should be able to test your migrated database against a realistic workload. This includes the amount of data being processed (volume) and the concurrent demands on the database (load). To perform such testing, you may need to set up automated procedures rather than expect your user community to test your database under realistic conditions while continuing work on the unmigrated production version. This test will ensure that the database is ready for its intended workload and should also expose any problems that the other tests didn't uncover.

Performing volume/load stress tests

1. Assemble either a workforce or automated scripts to represent a normal, everyday workload.
2. Exercise the system by having the users or scripts execute the applications concurrently.
3. Monitor and record the system performance as in the performance testing.

Building a load test

If you have software that can capture the keystrokes entered during an interactive session, you can use it to collect the session work completed by the users in earlier tests. You can use these captures to build scripts that emulate those sessions. Run multiple concurrent copies of these scripts to simulate different levels of system load.

Due to changes in the structure and use of internal structures (the data dictionary, rollback segments, and ROWIDs), you may find that the database behaves differently under load than it did in Oracle7. Although most resources won't reach a performance threshold as quickly as they might in Oracle7, you can't depend on this. It's therefore not advisable to assume that if you achieve performance equal to or better than Oracle7 with a small number of concurrent sessions manipulating a few tables, this performance level will be maintained under full volume and load.
Addressing problems with a volume/load stress test

Problems encountered while testing for volume and load should be addressed by applying the tuning strategies discussed in Chapters 20 through 23 of this book.
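To make the before-and-after comparisons in the performance and stress tests meaningful, you can capture the same set of statistics during each run in each environment. The following is a minimal sketch, run from SQL*Plus or Server Manager as a DBA user; the spool filename and the particular statistics chosen are illustrative:

```sql
REM Capture a snapshot of instance-wide activity during a test run.
REM Run the identical queries against Oracle7 and Oracle8 to compare results.
SPOOL perf_snapshot.lst

REM Overall workload and I/O indicators from the dynamic performance tables
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user calls', 'user commits',
                'physical reads', 'physical writes',
                'sorts (disk)', 'sorts (memory)');

REM Events that sessions spent the most time waiting on
SELECT event, total_waits, time_waited
  FROM v$system_event
 ORDER BY time_waited DESC;

SPOOL OFF
```

Saving each spool file with the test date and release in its name makes it easy to line the snapshots up side by side later.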

Setting Up a Test Program


Keeping in mind the various tests you need to perform, your test program should address the when, where, what, who, and how questions associated with each test. The test program should also address the methods you'll use to compare the actual results with what should be expected if the test is successful. This may include creating test suites that can be run on the current production database and on the Oracle8 test database; on the other hand, it could include simply recording the sizes of such objects as temporary segments and rollback segments in the test database so that you'll be prepared to size the associated tablespaces appropriately when the migration is performed for real.

When to perform the tests depends on the resources you need and their availability. For example, DBAs are used to working on major database changes during periods of low activity, such as late at night, weekends, and holidays. If your test needs the participation of developers or end users, however, you may have to plan the test during normal working hours.

Ensure the integrity of your test environment

There's no point performing a test for validity after migration if the original version is flawed. Similarly, if you plan to test just part of your database, you need to ensure that you'll get a valid subset of the data. For example, if you're going to test a function that adds new records to a table and a sequence generator is used for the primary key values, you'll have to ensure that the sequence generator is available in the pre-migration set of objects.

Where to perform your tests depends on your computer environment. Ideally, you want to test as much of the database as you can (all of it, if possible). This may require using a separate machine if one is available. A test or development machine is probably the best place to run the various tests. Remember, however, that you may have to schedule this machine if it's regularly used by the developers; some tests may require shutting down the Oracle7 database(s) running there.

What to test depends on the test you're performing. By referring to the previous section's descriptions of the different tests, make sure that you have the resources to complete the test and record the findings in a

meaningful way. For example, you won't learn anything about whether the Oracle8 performance is equal to, better than, or even worse than the Oracle7 performance if you don't have a method to record the performance characteristics you want to measure in each environment.

Who to involve in the testing also depends on the type of test. The earlier test descriptions should help you identify the type of personnel needed for each one. You may want to form a migration team with members from your system support, developer, and end-user communities if you're going to migrate a large database. This team can help you schedule the tests in such a way that they don't cause major conflicts with other groups. For example, you would want to avoid running a test that needs input from the users during times of heavy workloads, such as month-end processing. The team can also help you find the best resources within their respective groups to aid with the tests and can act as your communication channel back to the various groups regarding the migration progress.

How to complete the tests depends on your environment as well as the test type. You need to decide whether you'll run tests on the whole database or on partial applications. This, of course, depends on the resources you have available. Similarly, you need to ensure that you have the resources, including people and tools, to fix any problems encountered during the testing so that you can keep the migration project on track. The individuals needed to fix a problem may not be the same as those involved in the test itself.

The how question also needs to include how you'll obtain your test data. If you want to test against the entire database, you'll need a method to create an exact copy of it, possibly on a separate machine. This could involve an export/import or some form of unload/reload utility. If you use the latter, you need a verification test suite to ensure that the copy was successful.
After your test plan is in place, you can begin the process of fully testing a migration. Ideally, you'll run every test on a complete test version of the migrated database before tackling the migration of the production system.
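One simple building block for such a verification suite is to count the objects in each schema by type; run the same query on the source database and on the test copy and compare the output. A sketch, run as a DBA user:

```sql
REM Compare this output from the source and the migrated or copied database.
REM Any difference points to objects lost along the way.
SELECT owner, object_type, COUNT(*) AS object_count
  FROM dba_objects
 GROUP BY owner, object_type
 ORDER BY owner, object_type;
```

Spooling the results from each database to a file lets you diff them mechanically rather than eyeball hundreds of rows.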

Testing and Retesting


As you complete each test in your test plan, you should be able to determine whether it's successful. If the test is successful, you can move to the next one; if it isn't, you need to fix the problem and retry the test. This part of the testing isn't always as straightforward as it sounds.

Suppose you encounter an error that involves a missing view that should contain the join of two tables. The error could reflect that one of the two underlying tables is missing, or that the view definition is no longer available or valid. If you've performed a full migration, you need to determine whether the Migration utility "lost" the view or table, or whether the view (or table) was missing before the migration was performed. If you don't have a copy of the pre-migrated database (or at least a way to reconstruct it), you can't determine the cause of the error. You'll have to go back and redo all the steps you took to get to this testing point, which may include copying the current production database over to your testing environment. Of course, because it has been in active use since you made your initial copy, the current copy you make won't be the same as the one you used for the test that failed. For example, the view may now have been deliberately dropped. You'll have to repeat all the tests to validate this new version of the database.

If you're testing a subset of the database, the missing view or table may not have been created because it wasn't included in the objects selected for migration. In this case, you need to decide whether you can just add it now and continue with your testing. Otherwise, as in the preceding case, you'll need to start over with the migration of the test set from an Oracle7 source and repeat all the testing.

If you run into a problem that you can't easily resolve, you have two options:

- You can continue with further tests to see if a more obvious reason for the problem manifests itself.
  If you still can't determine the cause of the problem at the conclusion of your tests, you should repeat the entire migration process up to the point where the problem was observed. This time, make detailed notes or keep log files of all your activities so that you can file a detailed report with the Oracle Support organization, should the problem recur.
- Simply restart the process as just discussed, without continuing further testing; keep detailed records in case you need to file a request for help with Oracle Support.
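When you're chasing down a missing or broken object such as the view described earlier, the data dictionary can narrow the search quickly. A sketch, run as a DBA user in either database; the object names are illustrative, not from the book:

```sql
REM Does the view (or either of its underlying tables) exist at all?
SELECT owner, object_name, object_type, status
  FROM dba_objects
 WHERE object_name IN ('MY_VIEW', 'TABLE_A', 'TABLE_B');

REM List every object left invalid after the migration
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';
```

Running the first query against the pre-migration copy as well tells you immediately whether the object was lost by the Migration utility or was never there to begin with.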

Performing the Migration


After you complete an acceptable test plan, you should use it to migrate and test a non-production version of your database. When you're certain that you're ready, you can perform your production system's migration.

Skip sections that don't relate to your chosen migration approach

If you're planning to use the Migration utility, continue with the following section. If you intend to use export/ import for your migration, skip to "Executing the Migration with Export/Import or Table Copying" in this chapter.

Executing the Migration Process with Oracle's Migration Utility


The database is addressed as a whole in the following detailed descriptions of the tasks you'll have to perform to complete your database migration. If you need to migrate only a portion of your production database, you must create a temporary Oracle7 database to hold just that portion. Apply the migration processing to this temporary database only.

General steps for preparing to migrate with the Migration utility

1. Load the Migration utility by using the Oracle8 installer.
2. Ensure that you have sufficient free space in the system tablespace.
3. Recover or remove any offline objects.
4. Remove any user with an ID of MIGRATE.
5. Override any in-doubt transactions.

After you complete the first task (loading the Migration utility), you may want to put your database into restricted mode to prevent users from making unwanted changes as you prepare it for the migration. To do this, perform a shutdown and then reopen the database with the RESTRICTED option. This will disconnect all current users and allow only those with the restricted session privilege to reconnect. If you're one of many DBAs with such a privilege, you should coordinate with your colleagues to ensure that only one of you is working on the migration process.

SEE ALSO
For details on the various options for starting a database, including the RESTRICTED option,

Loading the Migration Utility

The Migration utility for converting from Oracle7 to Oracle8 is provided as part of the Oracle8 installation media. You can load just the Migration utility by running the standard installation program for your specific hardware platform (the orainst program in UNIX and the SETUP.EXE program in Windows NT, for example).

Suggested responses to the installer's questions

1. Choose Install, Upgrade, or De-install on the Select the Installer Activity screen.
2. Choose Migrate from Oracle7 to Oracle8 on the Select Installer Option screen.
3. Choose Install Migration Utility on the Select an Oracle7 to Oracle8 Migration Option screen.
The installer will place a number of files into the Oracle7 home directory structure, including the following:

- The Oracle8 Migration utility, placed in the bin subdirectory
- The Oracle8 version of the message file, placed in the MESG subdirectory of the RDBMS subdirectory
- The Oracle8 version of the MIGRATE.BSQ file, placed in the DBS subdirectory
- Any required NLS files, placed in a subdirectory named data, which is in the path from the Oracle home directory that contains the subdirectory tree MIGRATE, NLS, and ADMIN

Confirm success of your installation

Following the installation, you should check the log file to confirm that the files were successfully installed.

Checking for Sufficient Space in the System Tablespace

The Migration utility needs space for both the Oracle7 and Oracle8 data dictionaries in the system tablespace. You can determine whether you have sufficient space available by using a special option in the Migration utility. Simply run the utility with the CHECK_ONLY option set to TRUE:

- On UNIX, the command is

     mig check_only=true

- On Windows NT, the command is

     mig80 check_only=true

Depending on your operating system, the name of the utility and the format of the results will vary. You'll typically need free space equivalent to about 1.5 times the space consumed by your current data dictionary.

Confirming That No Tablespaces or Data Files Need Recovery

All offline tablespaces should be brought back online unless you're certain that they were taken offline by using the TEMPORARY or IMMEDIATE option. After you bring them back online, you can use one of these options to take them back offline.

Unusable tablespaces

If you can't bring a tablespace back online because it needs recovery that can't be completed, you need to drop it; it will be unusable under Oracle8 anyway.

All data files must also be online. You can check the DBA_DATA_FILES view for the status. If any are offline and you can't bring them back online because they need recovery, the Migration utility will fail with errors.

Don't have a user called MIGRATE

The migration process will create a user called MIGRATE. Because this user is eventually dropped along with the Oracle7 data dictionary objects, you should ensure that you don't already have a database user with this name. If you do, create a new schema to contain the MIGRATE user's objects, or use a user-level export and plan to reimport the user following the migration. In either case, remember to drop the MIGRATE user after you save the objects from the schema. See Chapter 9, "Creating and Managing User Accounts," for information about user and schema management.

SEE ALSO
For a brief discussion of views,

Ensuring That You Don't Have Any Pending In-Doubt Transactions

If you've used distributed transactions in your Oracle7 database, you need to check that none are still pending due to problems with the two-phase commit mechanism, such as lost network connections or offline databases. You can find such transactions by examining the DBA_2PC_PENDING table.
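Both of these dictionary checks can be run from SQL*Plus or Server Manager before you shut the database down. A sketch, run as a DBA user:

```sql
REM Data files that aren't available will cause the Migration utility to fail
SELECT file_name, tablespace_name, status
  FROM dba_data_files
 WHERE status <> 'AVAILABLE';

REM Distributed transactions still pending from a failed two-phase commit
SELECT local_tran_id, global_tran_id, state
  FROM dba_2pc_pending;
```

Both queries should return no rows before you proceed with the migration.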
If you have any such transactions, you need to commit or roll them back manually. You can find the instructions on how to do this in your distributed database documentation, including details on how to determine whether you should commit or roll back.

Performing a Normal Shutdown of the Oracle7 Database

When you've readied your database for the migration by performing the preceding tasks, you can shut down your database. You need to shut it down cleanly, that is, with the NORMAL or IMMEDIATE option. If you can't do this and have to use the ABORT option, you need to restart the database and then shut it down again with one of the other options. This ensures that there are no pending transactions or incomplete checkpoints, leaving your database in the appropriate state for the migration.

SEE ALSO
For details on database shutdown options and commands,

Backing Up the Database in Case of Problems

After your database is shut down, you should make a full backup just in case the migration process needs to be repeated, as discussed in the earlier section on testing. The backup needs to be made any time you plan to migrate a database that has been opened subsequent to your last pre-migration backup, unless you don't mind losing the changes made during that period.

Hot backup option before migration

If you don't have the time to complete an offline backup, you can complete an online backup immediately before shutting the database down for the migration. Remember that as soon as it's closed, you should back up the online redo logs as well. If you need to restore the Oracle7 version for another migration attempt, you have to recover the backup to a stable point, which requires the contents of the online redo logs.

SEE ALSO
For an overview of hot backup strategies,

Detailed descriptions of hot backup steps are available on

Run the Migration Utility

You may need to set certain system values before running the Migration utility program. These will vary between operating systems, and you need to examine your platform-specific documentation for details on what to set and what values they require. For example, the TWO_TASK and ORA_NLS33 variables have to be set appropriately. You also need to use this documentation to find out how to run the migration program and provide the appropriate options. The options for the migration program are documented in Table 3.3.

Table 3.3 Options for the Migration Program

CHECK_ONLY or NO_SPACE_CHECK   These mutually exclusive options are used to determine whether the system tablespace is large enough to complete the migration, or to skip making this check. You should need the CHECK_ONLY option only in the pre-migration steps, as discussed earlier.

DBNAME        This option specifies the name of the database to migrate.

NEW_DBNAME    This option specifies the new name for the database. By default, the new name is DEFAULT, so you're strongly encouraged to set this value.

MULTIPLIER    This option changes the initial size of one specific data dictionary index. A value of 30 makes it three times larger, for example. The default value (15) should be adequate for most users.

NLS_CHAR      By setting this option, you can change the National Language Standard (NLS) NCHAR character set used for your database. Not setting this option leaves your Oracle7 character set in place.

PFILE         This is the name of the parameter file to be used by the instance in which the migration will occur. Not setting this option causes the default file to be used.

SPOOL         This option names the full path and filename where the Migration utility will write its log file. When the Migration utility completes its processing, you should check the spool file to see if any errors occurred.

Don't open the database as an Oracle7 database at this point; further conversion steps need to be completed before the database is usable again. Prematurely opening the database corrupts this intermediate version, and you won't be able to complete the migration process successfully.

Time to take a backup

You should make a backup of this version of the database because it can serve as your first Oracle8 backup, as well as an intermediate starting point for another migration attempt.

Moving or Copying the Convert File

The Migration utility created a convert file for you in the Oracle7 environment. This file will be found in the DBS, or related, directory under the Oracle7 home directory and will be named CONVSID.DBF (where SID is the Oracle7 instance name). You'll need to move this file to the corresponding directory in the Oracle8 home directory, renaming it to reflect the Oracle8 instance name if this is different. If you aren't going to uninstall Oracle7 at this time, you can wait and complete the file transfer in a single step. If you're going to uninstall Oracle7, make a copy of this file outside the Oracle directory structure so that you can find it later.

Installing the Oracle8 Version of Oracle

If you don't have space for the Oracle8 installation, you can remove the Oracle7 directory structure before beginning this step. However, it is recommended that you back it up first, in case you need to use your Oracle7 database again. Use the Oracle7 installer to uninstall Oracle7 and the Oracle8 installer to add the Oracle8 files. Your platform-specific documentation explains how to run the installer for both operations. When installing Oracle8, be sure to select the Install/Upgrade option in order to prevent Oracle from creating a brand-new database that you won't need.

Adjusting Your Environment Variables and Parameter File
You need to ensure that your operating system is aware of and using the new Oracle8 code before continuing with the migration process. The remaining migration tasks require the Oracle8 executables to manipulate your database. This means resetting the pointers to Oracle Home and related structures, whatever they might be for your operating system. Again, you need to refer to your platform-specific documentation if you aren't sure what these are.

You also need to check your Oracle7 parameter file for obsolete or changed parameters. These are listed in the Oracle8 Server Migration Manual, available as part of the Oracle8 distribution media. Table 3.4 lists the non-platform-specific parameters that you need to address.

Table 3.4 Obsolete and changed parameters

Oracle7 Name                   Obsolete   Oracle8 Name
INIT_SQL_FILES                 Yes
LM_DOMAINS                     Yes
LM_NON_FAULT_TOLERANT          Yes
PARALLEL_DEFAULT_SCANSIZE      Yes
SEQUENCE_CACHE_HASH_BUCKETS    Yes
SERIALIZABLE                   Yes
SESSION_CACHED_CURSORS         Yes
SNAPSHOT_REFRESH_INTERVAL      No         JOB_QUEUE_INTERVAL
SNAPSHOT_REFRESH_PROCESS       No         JOB_QUEUE_PROCESSES
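One way to see which of the parameters in Table 3.4 your Oracle7 instance actually sets is to query the V$PARAMETER view before you edit the file. A sketch, shown for a few of the listed names (extend the IN list to cover the rest):

```sql
REM Check the running Oracle7 instance for parameters that are
REM obsolete or renamed in Oracle8 (names from Table 3.4).
SELECT name, value
  FROM v$parameter
 WHERE name IN ('sequence_cache_hash_buckets',
                'serializable',
                'snapshot_refresh_interval');
```

Any rows returned identify lines you'll need to remove or rename in the parameter file.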

Use your favorite editor to make any necessary changes to your parameter file. You may also want to move it to a new directory so that it stays with your other Oracle8 files. If you use the default conventions for your parameter filename and location, see the Oracle8 documentation for your specific system to identify what these need to be.

Removing or Renaming the Current Control and Convert Files

You'll perform a conversion step a little later that will create new control files for your database. At this time, therefore, you should remove the control files your database was using. Drop them (if they're safely backed up) or rename them so that you can find them again if needed.

If you've already uninstalled Oracle7, you should have copied the convert file to a safe place, as discussed earlier in "Moving or Copying the Convert File." You should now move this copy to the appropriate directory in your Oracle8 home directory structure. If you haven't uninstalled Oracle7, simply copy the file, renaming it if necessary, to the corresponding directory under Oracle8; see the earlier section titled "Moving or Copying the Convert File" for details.

Starting an Instance

Use Server Manager and the INTERNAL user to start an instance. You should then start a spool file to track the remaining conversion tasks performed on the database. You can use the following script to complete these steps by using Server Manager running in line mode:

     CONNECT INTERNAL
     STARTUP NOMOUNT
     SPOOL convert

Complete the remaining database conversion activities

1. Issue the command ALTER DATABASE CONVERT to build new control files and update the data file header information. This is a point of no return! After ALTER DATABASE CONVERT completes, you can no longer use your database with Oracle7 code or programs.
2. Open the database, which will convert the rollback segments to their Oracle8 format.
3. Run the CAT8000.SQL script to do the following:

Locating the CAT8000.SQL script and the log file

If you aren't in the directory where the CAT8000.SQL script is located, you need to include the full path name. You'll find this script in the Oracle home directory, under the ADMIN directory, which is under the RDBMS directory. After issuing the HOST command to check for the session's log, you should find the log in your current directory. It will be named CONVERT.LST, but the name may be case sensitive on some operating systems.

   - Convert the rollback segments to their Oracle8 format.
   - Update data dictionary components.
   - Drop the MIGRATE user with the Oracle7 data dictionary.

4. Shut down the database if the preceding tasks are all successful.
5. Complete the post-migration tasks.

Perform these steps while still connected to your Server Manager session by using the following commands:

     ALTER DATABASE CONVERT;
     ALTER DATABASE OPEN RESETLOGS;
     @CAT8000.SQL
     HOST
     SHUTDOWN

At the HOST prompt, examine the CONVERT.LST log file to check for errors, and then EXIT back to Server Manager before issuing the SHUTDOWN command. If you find errors in the log file, you may need to repeat the tasks discussed in this section; you may instead, depending on the severity of the problem, have to repeat most or all of the migration process after correcting the cause of the errors.

If you've completed your migration at this point, you can skip the following discussion of alternate migration techniques. Continue with the section "Completing Post-Migration Steps" to learn how to make your Oracle8 database available to your applications and users.
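Before moving on, a quick sanity check against the converted database confirms that the conversion took and that the cleanup finished. A sketch, run from the reopened database as a DBA user:

```sql
REM Confirm the kernel version the instance is now running
SELECT banner FROM v$version;

REM Confirm the temporary MIGRATE user was dropped by CAT8000.SQL
SELECT username FROM dba_users WHERE username = 'MIGRATE';
```

The first query's banners should report an Oracle8 release, and the second should return no rows.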

Executing the Migration with Export/Import or Table Copying


In this section you look at two other options for migrating your database that you may want to use instead of the Migration utility. Both require you to move data between an Oracle7 and an Oracle8 database. Therefore, unlike the Migration utility process, you have to build an Oracle8 database yourself, or let the installer build one for you. In all probability, you will need to customize any database you build to match the structure of your Oracle7 database.

The main difference between these two approaches is that the export/import option lets you work on each database independently, so you can remove your Oracle7 database before building and populating your Oracle8 database. This can be useful if space is at a premium. The table-copying method requires that the Oracle7 and the Oracle8 databases both be online during the migration process. If you have the space, you can also leave both databases in place during an export/import. However, if the Oracle7 database is available to users following the export, the changed data will have to be identified and migrated separately.

In the following sections, it's assumed that you're going to keep your Oracle7 database in place during the whole migration process, but it's pointed out when you could drop it, depending on which method you're using. The following steps are based on this assumption.

Step 1: Install Oracle8

Install Oracle8 on your system by using the Oracle installer as described in your platform-specific documentation. You can choose to have the installer build your initial Oracle8 database if you prefer.

Step 2: Prepare Your Oracle8 Database

If you've let the installer build your database, you simply need to add the tablespaces that match your current Oracle7 structure. You should also add the same number of rollback segments and redo logs as you're now using in Oracle7. If you're using the table-copying method, you also need to create the users at this point.

Copying tables across database links

If you plan to use CREATE TABLE...AS SELECT commands to make table copies, you also need to build database links that allow the Oracle7 and Oracle8 databases to work together. Database links themselves are described in the Oracle8 SQL manual and in the distributed database documentation. If you aren't familiar with distributed processing and database links, this is probably not a good method to use for your migration.

Step 3: Prepare to Migrate

If you're performing the export/import process, you should now create the export file of your full database, or whatever pieces of the database you want to migrate. After this, you can shut down your Oracle7 database and uninstall Oracle7 if you want. If you're performing table copying, you need to define the network protocol and addresses for SQL*Net or Net8.

SEE ALSO
If you don't already have these tools configured, you might as well use Net8, which is discussed on

Step 4: Move the Data

Now you can move the data into the Oracle8 database. Using export/import, you simply execute the Oracle8 import command and provide the name of the file you exported in step 3.

If you're performing table copying, you can use either the COPY command available in SQL*Plus or the SQL CREATE TABLE...AS SELECT command. The former identifies the target (Oracle8) or the source (Oracle7) database, or both, using SQL*Net or Net8 aliases from the TNSNAMES.ORA file. The latter uses a database link name appended to either the new table name or the name of the table being copied, depending on where the command is running. If you're in the Oracle7 database, the link name is appended to the new table name; if you're in Oracle8, the link name goes on the source table name.

Your database should be ready, if you used export/import, after you complete the data transfer. If you performed table copying, you may still need to duplicate the other objects in your Oracle7 database, such as indexes, views, synonyms, and privileges.
The simplest way to do this is with an export/import of the full database. In this case, though, you wouldn't export the table rows and would have to allow the import to ignore errors due to existing tables.
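As a sketch of the table-copying commands just described, run from the Oracle8 side; the link name, connect string, account, and table name are illustrative, not from the book:

```sql
REM CREATE TABLE...AS SELECT: pull a table across a database link
REM that points back to the Oracle7 database.
CREATE TABLE emp AS
  SELECT * FROM emp@ora7_link;

REM The SQL*Plus COPY equivalent, naming the Oracle7 source
REM with a SQL*Net/Net8 alias (hyphens continue the command):
COPY FROM scott/tiger@ora7_db -
  CREATE emp -
  USING SELECT * FROM emp;
```

Either form copies rows and column definitions only; as noted above, indexes, constraints, grants, and other dependent objects still have to be recreated separately.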

Completing Post-Migration Steps


The following sections cover the steps needed to make the database accessible by the applications and the users of those applications. You might not need to follow each step exactly, depending on your system and application mix.

Precompiler Applications
Even if you don't intend to make any changes to your precompiler applications, you need to relink the applications before they will run against the Oracle8 database. You should relink them to the SQLLIB runtime library provided with the Oracle8 precompiler. Of course, if you want to take advantage of some new features of Oracle8, you need to modify your code and observe the standard precompile and compile steps.

OCI Applications
You can use your Oracle7 OCI applications with Oracle8 unchanged. If you have constraints in your applications, however, you should relink the applications with the Oracle8 runtime OCI library, OCILIB. You can choose non-deferred mode to relink, in which case you'll experience Oracle7 performance levels, or you can use deferred mode linking to improve performance. The latter may not report linking, bind, and define errors until later in the execution of the statements than you're used to seeing; specifically, they will occur during DESCRIBE, EXECUTE, or FETCH calls rather than immediately after the bind and define operations.

Obsolete OCI calls
Two calls used in OCI programs, ORLON and OLON, are no longer supported in Oracle8; you should use OLOG in their place. Although OLOG was originally introduced for multithreaded applications, it's now required for single-threaded code as well.

http://www.informit.com/content/0789716534/element_003.shtml (15 of 16) [26.05.2000 16:46:27]

informit.com -- Your Brain is Hungry. InformIT - Migrating an Oracle7 Database to Oracle8 From: Using Oracle8

SQL*Plus Scripts
Ensure that your SQL*Plus scripts don't contain a SET COMPATIBILITY V7 command. If they do, change it to SET COMPATIBILITY V8. Also remember to check any LOGIN.SQL scripts for this command.
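A quick way to find offending scripts is a case-insensitive grep; this is a sketch, and the /tmp/sqlplus_scripts directory and the two sample files are hypothetical stand-ins for your own script directory:

```shell
# Build a couple of sample scripts to scan (illustrative only).
mkdir -p /tmp/sqlplus_scripts
printf 'set compatibility v7\nselect * from dual;\n' \
  > /tmp/sqlplus_scripts/report.sql
printf 'SET COMPATIBILITY V8\nselect * from dual;\n' \
  > /tmp/sqlplus_scripts/login.sql

# List every script that still forces V7 compatibility;
# -i ignores case, -l prints only the matching file names.
grep -il 'SET COMPATIBILITY V7' /tmp/sqlplus_scripts/*.sql
# Prints: /tmp/sqlplus_scripts/report.sql
```

Each file this reports needs its setting changed to SET COMPATIBILITY V8.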

SQL*Net
The only severe problem you might run into with SQL*Net is if you're still using version 1. Oracle8 will only communicate via SQL*Net version 2 or Net8. The SQL*Net v2.0 Administrator's Guide and SQL*Net version 2 Migration Guide explain how to upgrade to version 2. As with other Oracle8 products, Net8 gives you a lot of additional features that you may want to consider using.

Enterprise Backup Utility (EBU)


Oracle8 has replaced the Enterprise Backup utility (EBU) with Recovery Manager (RMAN). Therefore, any code and routines you've developed around EBU will need to be replaced. In addition, the backup volumes created under EBU aren't usable by Oracle7. EBU and RMAN both use the same Media Management Language to talk to third-party storage subsystems, so you should still be able to use any tape subsystems and tape management modules that you used with EBU when you convert your backup routines to RMAN.

Standby Databases
A standby database must run the exact same release as the production database that it mirrors. Therefore, you need to upgrade any standby database after you upgrade your Oracle7 production database.

Migrate your standby database to Oracle8
1. Apply all redo logs created under Oracle7.
2. Ensure that the primary database is successfully opened under Oracle8.
3. Install Oracle8 on the standby database platform.
4. Copy the production database's control file and first data file to the standby site.
5. Make a new control file for the standby database.

Impact of using new Oracle8 features
If you begin using Oracle8's new features, you may have to make further changes to applications by using the products already discussed, and you may have to change code and procedures related to the tools listed here. For example, you have to run the CATEXP7.SQL script if you want to export Oracle8-partitioned tables to an Oracle7 database.

Migration: Final Considerations


The following Oracle products will run unchanged against your Oracle8 database:
- Forms
- Developer/2000 applications
- PL/SQL
- Export/Import

You should consider the possible improvements you might obtain, however, if you begin using some of the appropriate Oracle8 enhancements. This doesn't have to be done immediately, of course, but over a period of weeks or months, as time permits. You should also ensure that the application developers are aware of the possible enhancements to their code.



informit.com -- Your Brain is Hungry. InformIT - Managing with Oracle Enterprise Manager (OEM) From: Using Oracle8


Managing with Oracle Enterprise Manager (OEM)


From: Using Oracle8. Author: David Austin. Publisher: Que.

Introducing OEM Components
    OEM Console
    Common Services
    Intelligent Agents
    Application Programming Interface (API)
Installing and Configuring OEM
    Minimum Requirements
    Compatibility Issues
    Performing the OEM Installation
    User and Repository Setup
    Starting the Intelligent Agent and the Listener
    Testing the Configuration
Setting Up Preferred Credentials
Setting Up Security
    Examples of Client Files Required by Enterprise Manager
    Examples of Server Files Required by Enterprise Manager
Basic Management Tasks with OEM
    Starting and Stopping Your Database
    Managing Users and Privileges
    Using OEM's Navigator Menu to Manipulate Users and Privileges
    Managing Database Storage
Performing Advanced Management Tasks with the Performance Pack
    Using Oracle Performance Manager
    Using Oracle Expert
    Using Oracle TopSessions

In this chapter you learn how to:
- Install and configure Oracle Enterprise Manager
- Set up the Repository
- Manage users and privileges
- Manage storage
- Tune your database

Introducing OEM Components


The Oracle Enterprise Manager combines a graphical console, agent processes, and common services to provide an integrated and comprehensive systems management platform for managing Oracle databases on the network. You can perform the following tasks from Enterprise Manager:
- Administer, diagnose, and tune multiple databases.
- Schedule jobs, such as executing a SQL*Plus script on multiple nodes.
- Monitor objects and events, such as database and node failures, throughout the network.
- Integrate third-party tools.

Don't install OEM 1.2.2 and OEM 1.5.0 in an Oracle 8.0.3 home
OEM v1.2.2 isn't compatible with Oracle Server 7.3.3 and 8.0.x. OEM v1.5.0 shouldn't be installed in an Oracle Server 8.0.3 home directory. OEM's latest version (1.5.5), however, works fine for Oracle 7.3.3 and 8.0.x.

Table 4.1 describes OEM's database application tools that allow you to perform the primary database administration tasks; Figure 4.1 shows these components.
Figure 4.1: OEM comprises various components that can be used for specific tasks.

Table 4.1 OEM components and their functions
Instance Manager: Manage instances, INIT.ORA file initialization parameters, and sessions
TableSpace Manager: Manage fragmentation and free space in tablespaces
Storage Manager: Manage tablespaces, data files, and rollback segments
Security Manager: Manage users, roles, privileges, and profiles
Schema Manager: Manage schema objects such as tables, indexes, views, clusters, synonyms, and sequences
Server Manager: Perform line-mode database operations from the client
Software Manager: Manage the software distribution process
Backup Manager: Perform database backups and create backup scripts
Data Manager: Perform export/import and data loads

SEE ALSO
Using the Instance Manager,

The Enterprise Manager environment consists of the following major components:
- Oracle Enterprise Manager Console
- Services common to all OEM components, such as the Repository, discovery service, communication daemon, and job scheduling and event management systems
- Intelligent agent
- Integrated applications
- Application programming interfaces (APIs)
- Command Line Interface (CLI)
- Online help system

The basic OEM functionality is available to you with Oracle Server; however, you can install several optional management packs: Change Management Pack, Diagnostic Pack, and Tuning Pack.

OEM Console
The console user interface contains a set of windows that provide various views of the system. There's only one console per client machine. Table 4.2 describes the various components of the console.

Table 4.2 Components of the console
Navigator window: A tree view of all the objects in the system and their relationships
Map window: Allows customization of the system views
Job window: User interface to the Job Scheduling system
Event Management window: User interface to the event management system

Common Services
The following services are common to various OEM components (see Table 4.3 for details on how these components interact):

- The Repository is a set of tables in your schema, which can be placed in any Oracle database in the system. Information stored in the Repository includes the status of jobs and events, the discovery cache, tasks performed, and messages from the notification queue in the communication daemon. To set up the Repository and manipulate it, you need to log on with an account that has DBA privileges. When logging in to Enterprise Manager, you're establishing a database connection to your Repository. At any given time, each user is connected to a single Repository, and the connection to the Repository must be active during your working sessions. There can be more than one repository in the system: Repository tables can be installed on any Oracle database accessible to the console, and a repository can be moved to another Oracle database. The Repository can be started or shut down from the Instance Manager but not from the console.

Multiple repositories can exist within the same database
You can use one repository, or you can switch between multiple repositories stored in the same database.

- The communication daemon is a multithreaded process that manages console communication activities. It's responsible for communicating with agents and nodes for job scheduling and event monitoring, queuing and retrying failed jobs periodically, service discovery, contacting nodes periodically to determine their status, and maintaining a cache of connections to agents on nodes.
- The Job Scheduling System enables you to schedule jobs on remote sites by specifying the task to perform, the start time, and the frequency of execution. The Job System isn't usable if the Oracle Intelligent Agent isn't installed and configured.

Reactive management is provided by the job and event systems
You can use the Job and Event systems together to provide a reactive management system. This is achieved by allowing certain jobs to be executed when specified events occur.

- The Event Management System monitors events at remote sites, alerts you when a problem is detected, and optionally fixes it. In Windows NT, the application event log contains many of the errors detected by OEM.
- Security Services manages administrative privileges for nodes and services in the system. It maintains a list of administrators who are notified when an event occurs.
- Service Discovery maintains an up-to-date view of the nodes and services being managed. The console's Navigator tree is populated with this information.


Table 4.3 Communication between OEM components
Console and communication daemon: The console sends job and event requests to the communication daemon, and the status of these jobs and events is sent back to the console. Authentication requests of users logging in to the console are sent to the daemon. The daemon sends information to update the tree of nodes and services in the Navigator.
Communication daemon and Common Services: Job and event requests are handed to the Job or Event Management systems. The Common Services pass job and event status back to the communication daemon. Service Discovery information is passed from the Common Services to the daemon.
Communication daemon and intelligent agent: Agents communicate with the daemon to report results and status messages for jobs and events from the remote nodes.
Common Services and Repository: The Event Management and Job Management systems write event and job information, respectively, to the Repository.

Figure 4.2 represents the communication path between the different components of Enterprise Manager in terms of the jobs, events, or any other requests logged in the console.
Figure 4.2: Interaction between the various OEM components is well-defined.

Intelligent Agents
Intelligent agents are intelligent processes running on remote nodes. Each agent resides on the same node as the service it supports and can support all the services on that node. Intelligent agents perform the following functions:
- Execute jobs or events. Jobs or events from the console or third-party applications can be sent to the intelligent agent for execution.
- Cancel jobs or events as directed.
- Run jobs, collecting results and queuing them for the communication daemon.
- Run autonomously without requiring the console or the daemon to be running.
- Autonomously detect and take reactive measures (as specified by the administrator) to fix problems.
- Autonomously perform specified administrative tasks.
- Check and report events to the communication daemon.
- Handle SNMP requests if supported on the agent's platform.

Use an intelligent agent to manage an older Oracle release
Each intelligent agent is compatible with the database with which it's released and with prior database releases. When used to manage an older release of the database, the intelligent agent must be installed in an ORACLE_HOME directory current with the agent release. Older releases of the intelligent agent aren't compatible with newer releases of the database.

An agent is required for all or some functionality of these components: Service Discovery; Job Control System; Event Management System; Backup Manager; Software Manager; Data Manager's Export, Import, and Load applications; Oracle Events; and Trace.

Application Programming Interface (API)


The APIs available with Enterprise Manager enable third-party applications (for example, applications that can analyze the data collected through Oracle Expert) to integrate the console with the Common Services. Third-party applications written in C++ that use OLE technology work very well with these APIs. Applications can be integrated at the console, service, or agent level; however, this integration depends on the third-party applications.

Installing and Configuring OEM


Several issues are involved in the installation and configuration of Enterprise Manager that you should address for the components to work together. Some issues include setting up the client, the server, and the Repository. Configuration involves setting preferred credentials and setting up security, among other things.

Not available for UNIX
OEM is available only for Windows NT and Windows 95. However, the intelligent agent can run on UNIX or Windows NT.

Minimum Requirements
You need the following minimum hardware resources to install and use the OEM components:
- Intel 486 PC or higher
- VGA video (SVGA strongly recommended)
- 32MB RAM
- CD-ROM drive
- Windows 95/NT-compatible network adapter
- 25MB of hard disk space for Oracle Enterprise Manager, Net8, and required Oracle support files
- 4MB of hard disk space for Oracle Enterprise Manager online documentation
- 15MB of hard disk space for OEM's Performance Pack
- Disk space for installing a local Oracle database or the intelligent agent for Windows NT

Installing documentation is optional
The OEM documentation can take a lot of space. If you don't have enough disk space, you can run it from the CD-ROM when needed.

The following minimum software resources are needed:
- Microsoft Windows NT version 3.51 or higher, or Windows 95
- TCP/IP services

Compatibility Issues
Table 4.4 lists the components of Oracle Enterprise Manager version 1.5.0 and their compatibility with specific releases of Oracle Server.

Table 4.4 Compatibility matrix for OEM 1.5.0

Feature                     Server 7.2   Server 7.3   Server 8.0.3   Server 8.0.4
Repository
  Local                     no           yes          See /1/        yes
  Remote                    yes          yes          yes            yes
Service Discovery           yes          yes          yes            yes
Job Control System          yes          yes          yes            yes
Event Management System     yes          yes          yes            yes
Database Applications
  Backup Manager            yes          yes          yes            yes
  Instance Manager          yes          yes          yes            yes
  Schema Manager            yes          yes          yes            yes
  Security Manager          yes          yes          yes            yes
  Storage Manager           yes          yes          yes            yes
  SQL Worksheet             yes          yes          yes            yes
  Software Manager          no           See /2/      yes            yes
Utility Applications
  Data Manager/Export       no           yes          yes            yes
  Data Manager/Import       no           yes          yes            yes
  Data Manager/Load         no           yes          yes            yes
Performance Pack
  Expert                    yes          yes          yes            yes
  Lock Manager              yes          yes          yes            yes
  Oracle Events             no           yes          yes            yes
  Performance Manager       yes          yes          yes            yes
  Tablespace Manager        no           yes          yes            yes
  Top Sessions              yes          yes          yes            yes
  Trace                     no           yes          yes            yes

/1/ OEM 1.5 must be installed in a different home if there is a local 8.0.3 database.
/2/ Software Manager can support Oracle Server 7.3.3 agents (Windows NT only) with upgraded OSM job files.

Performing the OEM Installation


Install and configure Enterprise Manager (general steps)
1. Configure Net8 locally and on the server.
2. Set up the client.
3. Set up the server.
4. Set up the Repository.
5. Install the OEM software on the client.
6. Install the intelligent agent on the server. The agent and the database that it services must be installed on the same node.

Configuring Net8
You can use the following tools to generate the different files required for Net8 and Enterprise Manager:
- Network Manager 3.1 for Windows to generate Net8 files and TNSNAMES.ORA
- Oracle Topology Generator to generate TOPOLOGY.ORA
- An ASCII editor

Net8 must be installed before installing OEM
If Net8 isn't installed on the machine, select it from the OEM installer. You also can choose to install the Performance Pack at this point.

You need to configure Net8 so that it can reach the databases you want to access with OEM. You use Network Manager and the Topology Generator to generate the TNSNAMES.ORA file, which will contain the information for the sample database. However, you need to edit this file with the Net8 easy configuration utility or a text editor so that it also has information for the other databases you will use from OEM.

Installing and Configuring the Intelligent Agent
You can choose to install the intelligent agent as part of the Oracle Server installation, or you can install it later by running the Oracle installer. The following is required for the agent to function correctly:
- Sun SPARCstation
- Solaris version 2.4
- 32MB RAM
- Oracle Server version 7.3 or higher
- SQL*Net version 2 or higher (Net8)

Before the agent is started, you must do the following to create a user account with appropriate privileges for the intelligent agent:
- Run the CATSNMP.SQL script (found in the $ORACLE_HOME/rdbms/admin directory) from Server Manager to create a user for the intelligent agent and give that user DBA privileges.
SEE ALSO
Creating a user,
Granting privileges to roles,
- Edit the SNMP.ORA file with any text editor. This file contains the address of the agent and other listener information, which is read by the agent. It resides in the $ORACLE_HOME/network/admin directory on the database server.

Install Oracle Enterprise Manager
1. Log in to Windows NT as the administrator or as a user with permissions equivalent to an administrator.
2. Change to the \NT_x86\INSTALL directory on the CD-ROM drive.
3. Double-click ORAINST.EXE or SETUP.EXE to launch the Oracle installer.
4. Select Oracle Enterprise Manager to install the base product. The installer will search for the TOPOLOGY.ORA and TNSNAMES.ORA files in the ORACLE_HOME\network\admin directory. If TOPOLOGY.ORA isn't found, an error message appears. If TNSNAMES.ORA is found but not TOPOLOGY.ORA, you'll be prompted to create the TOPOLOGY.ORA file by using the Oracle Network Topology Generator. If the TNSNAMES.ORA file isn't found, you can use the Oracle Network Manager to create the file.
5. Exit the installer after installation is complete.
6. Log off from Windows NT and then log in again.
7. If a local Oracle NT database is being accessed, use Control Panel's Services tool to verify that the Oracle Service is started, and then start up the local NT database.

User and Repository Setup


Before Oracle Enterprise Manager is used, you must create a set of base tables that contain environment information for the managed databases; this is the Repository. You create the necessary tables in the Repository by using the SMPCRE.SQL and XPOCR.SQL scripts found in the $ORACLE_HOME/rdbms/admin directory. An Oracle user must be created with appropriate permissions to access the Repository before the scripts are run. For each user that needs to access the console, a separate Repository must be created and the setup scripts must be run.

Console and repository compatibility
The Repository must be compatible with the version of Oracle Enterprise Manager. If the Repository version is older or newer than the console version, you must install a more recent compatible version of Enterprise Manager.

Set up the user and Repository
1. Create a new user with Server Manager:
   SVRMGR> create user sysman identified by sysman
2. Grant this user the same privileges as SYSTEM:
   SVRMGR> grant dba to sysman
3. Connect as the new user:
   SVRMGR> connect sysman/sysman@testdb
4. Execute SMPCRE.SQL and XPOCR.SQL to build the tables and views required by the console and Expert:
   SVRMGR> @smpcre.sql
   SVRMGR> @xpocr.sql

The user sysman can now log in to Oracle Enterprise Manager.

Starting the Intelligent Agent and the Listener


For Enterprise Manager to connect and work successfully, the intelligent agent and the listener must be started on the server. For the client to communicate with the server, the communication daemon must be running.

Starting the daemon
The communication daemon is started and shut down automatically when Enterprise Manager is started.

The following shows how to run the agent and the listener:
Start the agent:                 C:\> net start oracleagent
Shut down the agent:             C:\> net stop oracleagent
View the agent's status:         C:\> net start
Start the listener on UNIX:      $ lsnrctl start testdblsnr
Shut down the listener on UNIX:  $ lsnrctl stop testdblsnr

Start/stop the listener in Windows NT
Use the Control Panel's Services tool to start and stop the listener in Windows NT.

Testing the Configuration


Test the configuration
1. Shut down the listener and the agent:
   $ lsnrctl stop testlsnr
   $ lsnrctl dbsnmp_stop
2. Start up and log in to Enterprise Manager. You should get the message ORA-12224: TNS: no listener, and login won't be possible. Enterprise Manager requires that the listener run on the server at all times.

Logging in to OEM doesn't require the agent to be running
Login is possible without the agent running, because the agent is required only for jobs and events submitted or returned by the remote database or the console.

3. Start up the listener:
   $ lsnrctl start testlsnr
4. Double-click the Oracle Enterprise Manager icon in Program Manager and log in to the database as sysman connected as SYSDBA (see Figure 4.3).

Figure 4.3: Log in to OEM by providing the information requested: the username to use for connecting to the database (for example, sysman), the password for that username, the Net8 service name for the database to which you're connecting, and whether to connect as Normal, SYSOPER, or SYSDBA.

Setting Up Preferred Credentials


You set up preferred credentials to avoid retyping the service name, service type, and username for each database, listener, or node that the user intends to access. The Preferred Credentials page of the User Preferences dialog box shows the list of databases, listeners, and nodes in the network (see Figure 4.4).
Figure 4.4: You can set preferred credentials from the console.

Set the preferred credentials
1. From the console's File menu, choose Preferences.
2. Select any service from the list of entries in the dialog box and fill in the Username, Password, Confirm, and Role text boxes.

Setting Up Security
The following operations on a remote instance require that security be set up for Enterprise Manager users:
- STARTUP
- SHUTDOWN
- ALTER DATABASE OPEN and MOUNT
- ALTER DATABASE BACKUP
- ARCHIVE LOG
- RECOVER
- All system privileges with ADMIN OPTION
- CREATE DATABASE
- Time-based recovery

Set up remote security
1. Create a password file by connecting to the oracle user account, changing to the $ORACLE_HOME/dbs directory, and then using the orapwd utility in UNIX:
   $ orapwd file=orapwtestdb password=testpass entries=10
   In this example, the SID is assumed to be testdb.
2. Grant the appropriate roles:
   SVRMGR> grant sysdba to sysman
   SVRMGR> grant sysoper to sysman
3. Edit the INIT.ORA file to add the following entry:
   REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
4. Shut down the instance.

At this point, the database instance can be shut down from Enterprise Manager, but local security needs to be set up on Windows NT clients to start up the database from OEM.

Set up local security
1. Download the INIT.ORA and CONFIG.ORA files from the server and copy them into the \OEM_directory\dbs directory on the Windows NT client.
2. On the client, edit the INIT.ORA file with any text editor, such as Notepad, and change the ifile entry to point to the directory in which the CONFIG.ORA file is located, with ifile set to the CONFIG.ORA file.
3. Restart Enterprise Manager.
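For example, if the downloaded files were copied under a hypothetical c:\oem_home\dbs directory on the client, the edited ifile entry in the client's INIT.ORA would read:

```text
ifile=c:\oem_home\dbs\config.ora
```

The path shown is illustrative; use whatever client directory actually holds your copy of CONFIG.ORA.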

Examples of Client Files Required by Enterprise Manager


OEM uses several files on the client side (SQLNET.ORA, LISTENER.ORA, and TOPOLOGY.ORA) in the $ORACLE_HOME/network/admin directory. SQLNET.ORA contains optional parameters that can be used for tracing Net8 connections, whereas LISTENER.ORA is used to provide the address on which the listener listens.

SQLNET.ORA
[Listing not shown]
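A minimal client SQLNET.ORA along these lines might look like the following; this is a hypothetical sketch whose parameter values mirror the server-side SQLNET.ORA shown in the next section (the tracing parameters are optional):

```text
AUTOMATIC_IPC=OFF
TRACE_LEVEL_CLIENT=off
SQLNET.EXPIRE_TIME=0
NAMES.DEFAULT_DOMAIN=world
NAME.DEFAULT_ZONE=world
```

With the default domain and zone set to world, every service name used by the client must carry the .world suffix.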


Notes on the client SQLNET.ORA:
- ORACLE_HOME is set to c:\orant.
- Because the default domain and zone are set to world, the service names in TNSNAMES.ORA should have world tagged onto them.

TNSNAMES.ORA
[Listing not shown]
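A client TNSNAMES.ORA entry for the test database, consistent with the annotations for this file, might look like the following; this is a hypothetical sketch whose host (fastmachine) and port (1701) are taken from the server-side listing shown in the next section and must agree with LISTENER.ORA and SNMP.ORA:

```text
test.world=(description=
  (address_list=
    (address=
      (community=tcp.world)
      (protocol=tcp)
      (host=fastmachine)
      (port=1701)))
  (connect_data=
    (sid=test)
    (global_name=test.world)))
```

A separate entry for the intelligent agent, using the agent's own port from SNMP.ORA, would follow the same shape.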

Notes on the client TNSNAMES.ORA:
- The domain world must match the SQLNET.ORA file.
- The agent entry's port should match the port in the SNMP.ORA file.
- The database entry's port should match the port in the LISTENER.ORA file.
- The database and SID name is test.
- The port numbers in TNSNAMES.ORA must be unused by any other service and must be valid port numbers per TCP/IP standards.

TOPOLOGY.ORA
[Listing not shown]

Notes on the client TOPOLOGY.ORA:
- Should match the agent name in TNSNAMES.ORA.
- Should match the listener name in LISTENER.ORA.
- The database name should match the one in TNSNAMES.ORA.

Examples of Server Files Required by Enterprise Manager


The agent uses several files on the server side (SQLNET.ORA, TNSNAMES.ORA, and LISTENER.ORA) that reside in the $ORACLE_HOME/network/agent directory. SQLNET.ORA contains optional parameters that can be used for tracing Net8 connections, whereas LISTENER.ORA is used to provide the address on which the listener listens.

SQLNET.ORA
AUTOMATIC_IPC=OFF
trace_level_server=off
TRACE_LEVEL_CLIENT=off
SQLNET.EXPIRE_TIME=0
NAMES.DEFAULT_DOMAIN=world
NAME.DEFAULT_ZONE=world
SQLNET.CRYPTO_SEED="-2089208790-14606653312"

TNSNAMES.ORA
#Database Addresses
test.world=(description=
  (address_list=
    (address=
      (community=tcp.world)
      (protocol=tcp)
      (host=fastmachine)
      (port=1701)))
  (connect_data=
    (sid=test)
    (global_name=test.world)))

LISTENER.ORA
[Listing not shown]
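A LISTENER.ORA along the lines this section describes might look like the following; this is a hypothetical sketch, not the book's listing. The listener name (testlsnr), host, SID, and port are illustrative and must agree with the other configuration files, and the oracle_home path is an assumption:

```text
testlsnr=
  (address_list=
    (address=
      (protocol=tcp)
      (host=fastmachine)
      (port=1701)))
sid_list_testlsnr=
  (sid_list=
    (sid_desc=
      (sid_name=test)
      (oracle_home=/u01/app/oracle/product/8.0.4)))
```

The port here is the one the database entry in the client's TNSNAMES.ORA must use.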

===========================================================
Notes on the preceding LISTENER.ORA listing:
- Listener name, domain, host name, and SID are the same in all the other files.
- Must match the port in TNSNAMES.ORA on the client and server machines.

SNMP.ORA
===========================================================

===========================================================
Notes on the preceding SNMP.ORA listing:
- Listener name, SID, and host name are the same in the other files.
- Must match exactly with agent address in TNSNAMES.ORA on client machine.

Basic Management Tasks with OEM


As an Oracle DBA, you'll perform several tasks in OEM on a daily basis, such as starting up and shutting down the database and managing users.

Starting and Stopping Your Database


After you set up remote and local security, you can start or shut down an Oracle database from the Enterprise Manager console. To start up or shut down a database, you must have the SYSOPER or SYSDBA role.

Start up the database
1. Start up and log in to the Enterprise Manager.
2. Double-click the database object to be started. If you haven't set up preferred credentials for this host, you'll get an error message. Type the correct username and password each time, or set up preferred credentials.
3. In the database property sheet that appears, choose the Startup option.
4. Click the appropriate startup option and specify the location of the INIT.ORA file.
5. Click the startup button.

Shut down the database
1. Start up and log in to Enterprise Manager.
2. Double-click the database object to be stopped. If preferred credentials haven't been set up for this host, an error message occurs. Type the correct username and password each time, or set up preferred credentials.
3. In the database property sheet that appears, choose the Shutdown option.
4. Click the appropriate shutdown option and then click the shutdown button.
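For comparison, the same operations can be performed from the command line. The following is a sketch only; the SYS password and parameter file path are hypothetical, and in Oracle8 you would typically issue these commands from Server Manager (svrmgrl), where CONNECT INTERNAL also works:

```sql
-- Connect with SYSDBA authority (password and path are examples)
CONNECT sys/change_on_install AS SYSDBA

-- Start the instance, naming the parameter file explicitly
STARTUP PFILE=/oracle/admin/test/pfile/init.ora

-- Shut down, waiting for in-flight transactions to roll back
SHUTDOWN IMMEDIATE
```

This is exactly what the console's Startup and Shutdown options do on your behalf.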

Managing Users and Privileges


You can easily manage users and privileges by using OEM's Security Manager component (see Figure 4.5). You can manage user information for several databases from one centralized location.
Figure 4.5: Security Manager enables you to manage users, profiles, and roles.
After Security Manager successfully connects to the database, you see a tree structure with three context-sensitive objects. The database name is displayed next to the database container, and the Users, Roles,

and Profiles containers branch from the current database container. You can use Security Manager's User menu to create, edit, or remove existing users on a database.

Manipulate roles and profiles from the menus
Roles and profiles can also be similarly created, edited, and removed by using the Roles and Profiles menus.

Create a user
1. From the User menu choose Create.
2. In the Create User property sheet, enter the new user's username in the Name text box (see Figure 4.6).
Figure 4.6: You can create users from the property sheet.
3. In the Authentication section, enter a password and then re-enter it to confirm it.
4. Choose the appropriate default and temporary tablespaces for the user from the Tablespaces section.
5. Click Create.
6. Verify the user creation by checking the Users object in Security Manager's tree structure. This verification can also be done by logging in as the new user with the password.
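Behind the scenes, the Create User property sheet issues ordinary SQL. A minimal equivalent looks like this (the username, password, and tablespace names are examples only):

```sql
-- Create the account with its authentication and space settings
CREATE USER jsmith IDENTIFIED BY secret
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;

-- The new user still needs at least this privilege before logging in
GRANT CREATE SESSION TO jsmith;
```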

Quick-edit a user
1. Right-click the username to be modified.
2. Select Quick Edit from the pop-up menu.
3. Make the desired changes to the quotas, privileges, and roles.
4. Click OK.

Remove a user
1. Select the username to be removed.
2. From the User menu, choose Remove.
3. In the confirmation dialog box, click Yes.
The user can also be removed by right-clicking the highlighted username and choosing Remove from the pop-up menu.

The User menu can be used to give privileges to users.

Assign privileges to users
1. From the User menu choose Add Privileges to Users.
2. In the Add Privileges to Users dialog box (see Figure 4.7), select the user to which privileges are to be granted. Ctrl+click additional users in the list to select more than one user.
Figure 4.7: Privileges can be assigned to users in the Add Privileges to Users dialog box.
3. Select the Privilege Type (Roles, System, or Object).
4. Select the privileges to be granted. Ctrl+click additional privileges in the list to select more than one privilege.
5. Click Apply.

The Security Manager's Profile menu can be used to assign existing profiles to existing users.

Assign profiles to users
1. From the Profile menu choose Assign Profile to Users.
2. In the Assign Profile dialog box (see Figure 4.8), select the user or users to whom profiles are to be assigned.
Figure 4.8: Users can be assigned profiles in the Assign Profile dialog box.
3. Select the profile to assign.
4. Click Apply.
5. Additional profiles can be assigned by repeating steps 3 and 4. Click OK when all profiles are assigned.
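These dialog boxes correspond to standard GRANT, ALTER USER, and DROP USER statements. A sketch of the equivalents, with hypothetical user, object, and profile names:

```sql
-- Grant a role, a system privilege, and an object privilege
GRANT connect TO jsmith;
GRANT CREATE TABLE TO jsmith;
GRANT SELECT ON scott.emp TO jsmith;

-- Assign an existing profile to the user
ALTER USER jsmith PROFILE app_profile;

-- Remove a user and everything the schema owns
DROP USER jsmith CASCADE;
```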

Using OEM's Navigator Menu to Manipulate Users and Privileges


Create a user with the Navigator menu
1. In OEM's navigator tree, click the + to the left of the Databases folder.
2. Click the + to the left of the database name.
3. Click the + to the left of the Users folder.
4. From the Navigator menu, select Create User.
5. Enter the User information and click OK.

Copy a user between databases
1. In OEM's navigator tree, click the + to the left of the Databases folder.
2. Click the + to the left of the database name.
3. Click the + to the left of the Users folder.
4. Select the username to be copied.
5. Drag and drop the username from one database to the other database folder.

Manage database user properties such as quotas, roles, and privileges
1. In the navigator tree, click the + to the left of the Database folder.
2. Click the + to the left of the database name.
3. Click the + to the left of the Users folder.
4. From the Navigator menu, select Alter User.
5. On any of the four tabbed pages (General, Quotas, Privileges, or Default Roles), select the desired types.
6. Click OK.

Managing Database Storage


You can use Storage Manager (see Figure 4.9) to perform administrative tasks associated with managing database storage, such as managing tablespaces and rollback segments and adding and renaming data files.
Figure 4.9: You can use Storage Manager to manipulate tablespaces, data files, and rollback segments.
You can use Oracle's Tablespace Manager to monitor and manage database storage. It can be used to display graphically how storage has been allocated for the database segments, to defragment segments, and to coalesce adjacent free blocks.

Monitoring Tablespaces
Tablespace Manager's main window includes a tree list on the left and a drill-down on the right for a detailed view. You use the Tablespace Manager as follows:
- Click the Tablespaces container to display the space usage for each tablespace in the instance.
- Double-click the Tablespaces container for additional information about the tablespaces.
- Click an individual tablespace container to display the Segments page for that tablespace. Clicking an individual segment graphically displays the space utilization for that segment.
- Click a data file container to find the space usage for a data file.
- Click an individual data file to display the Segments page for that data file (see Figure 4.10).
Figure 4.10: Use Tablespace Manager to manage data files.
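You can check the same free-space figures that these tools display with a query against the data dictionary; for example:

```sql
-- Free space remaining in each tablespace, from the data dictionary
SELECT tablespace_name,
       SUM(bytes)/1024/1024 AS free_mb
  FROM dba_free_space
 GROUP BY tablespace_name
 ORDER BY tablespace_name;
```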

Performing Advanced Management Tasks with the Performance Pack


As a DBA, you should frequently monitor your system resources to identify contention. OEM provides various predefined charts that can help you monitor the usage of the different resources that can contribute to contention. (For additional information on how to identify and reduce various types of contention, see Chapter 21, "Identifying and Reducing Contention.") Three resources need to be carefully monitored:
- CPU. Every process that executes on the server needs some time slice of the CPU time to complete its task. Some processes need a lot of CPU time, whereas others don't. You should be able to identify your CPU-intensive processes.

My tuning philosophy
Performance tuning shouldn't be treated as a reactive strategy; instead, it should be a preventive action based on trends detected through analysis by using tools such as the Performance Pack.

- Disk access. Every time a process needs data, it will first look at the buffer cache to see if the data is

already brought in. If the data isn't found, the process will access the disk. Disk access is very time-consuming and should be minimized.
- Memory. Insufficient memory can lead to performance degradation. When the system falls short on memory, it will start paging and eventually start swapping whole processes.

The Performance Pack is a value-added component of the Oracle Enterprise Manager. It provides various tools to monitor and tune the performance of your database. It's important to understand that performance tuning isn't done from a single point-in-time snapshot of the system; instead, you must take into consideration the system's performance over a period of time. You can perform three different types of tuning by using the Performance Pack components (see Table 4.5).

Table 4.5 Types of tuning available through the Performance Pack
Tuning Type      Description
Routine Tuning   Used to identify and solve potential problems before they occur
Focused Tuning   Used to resolve known performance problems
What-If Tuning   Used to determine what would happen if a particular configuration change is made

The Performance Pack provides several tools (see Table 4.6) to capture, store, and analyze information so you can improve overall performance.

Table 4.6 Performance Pack components and their functions
Component                   Function
Performance Manager         Displays tuning statistics on contention, database instance, I/O, load, and memory within predefined or customized charts
Oracle Expert               Collects and analyzes performance-tuning data on predefined rules, generates tuning recommendations, and provides scripts that help with the implementation of tuning recommendations
Oracle Trace                Collects performance data based on events and generates data for the Oracle Expert
Oracle TopSessions Monitor  Displays the top 10 sessions based on any specified sort criteria
Tablespace Viewer           Displays the free space left on each data file
Oracle Lock Manager         Displays the blocked and waiting sessions
Oracle Advanced Events      Monitors the specified conditions in the databases, nodes, and networks

To start the performance-monitoring applications from the OEM console, use the Performance Pack launch palette or the Performance Pack option on the Tools menu.

Using Oracle Performance Manager


Performance Manager is a tool for monitoring database performance in real time. It provides a number of predefined charts for displaying various statistics in different formats, including tables, line charts, bar charts, cube charts, and pie charts (see Figure 4.11).
Figure 4.11: Read consistency hit ratio is one type of information that can be charted.
Performance Manager's Display menu includes items for seven different categories of predefined charts. Table 4.7 describes these categories and the set of charts that focus on displaying information of that category.

Table 4.7 Charts used to identify contention
Category           Charts Included in This Category
Contention         Circuit, Dispatcher, Free List Hit %, Latch, Lock, Queue, Redo Allocation Hit %, Rollback NoWait Hit %, and Shared Server
Database_Instance  Process, Session, System Statistics, Table Access, Tablespace, Tablespace Free Space, #Users Active, #Users Waiting for Locks, and #Users Running
I/O                File I/O Rate, File I/O Rate Details, Network I/O Rate, and System I/O Rate
Load               Buffer Gets Rate, Network Bytes Rate, Redo Statistics Rate, Sort Rows Rate, Table Scan Rows Rate, and Throughput Rate
Memory             Buffer Cache Hit %, Data Dictionary Cache Hit %, Library Cache Hit %, Library Cache Details, SQL Area, Memory Allocated, Memory Sort Hit %, Parse Ratio, and Read Consistency Hit %
Overview           #Users Active, #Users Logged On, #Users Running, #Users Waiting, Buffer Cache Hit, Data Dictionary Cache Hit, File I/O Rate, Rollback NoWait Hit %, System I/O Rate, and Throughput Rate
User-Defined       Charts created by the user

By default, information in the predefined charts is presented in the following manner:
- Charts showing rates per unit of time are presented as line charts.
- Charts showing ratios are presented as pie charts.
- Charts consisting primarily of text information are presented as tables.
- Charts displaying a large number of instances are presented as tables.

The overview charts are a set of 12 predefined charts that give a good overall picture of the system (see Table 4.8).

Table 4.8 Predefined charts
Chart                      Description
Number of Users Active     Shows the number of users actively using the database instance. Obtains information from the V$SESSION view.
Number of Users Logged On  Shows the number of concurrent users logged on to the database instance, regardless of whether any activity is being performed. Obtains information from V$LICENSE.
Number of Users Running    Shows the number of concurrent users logged on to the database instance and now running a transaction. Obtains information from V$SESSION_WAIT.
Number of Users Waiting    Shows the number of users now waiting. Obtains information from V$SESSION_WAIT.
Buffer Cache Hit %         Shows the buffer cache hit percentage. Obtains information from V$SYSSTAT.
Data Dictionary Cache Hit  Shows the Data Dictionary cache hit. Obtains information from V$ROWCACHE.
File I/O Rate              Shows the number of physical reads and writes per second for each file of the database instance. Obtains information from V$DBFILE.
Rollback NoWait Hit %      Shows the hits and misses for online rollback segments. Obtains information from V$ROLLSTAT.
System I/O Rate            Shows I/O statistics including buffer gets, block changes, and physical reads per second for the database instance. Obtains information from V$SYSSTAT.
Throughput Rate            Shows the number of user calls and transactions per second for the instance. Obtains information from V$SYSSTAT.
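Several of these charts are built on simple queries against the V$ views. For example, the Buffer Cache Hit % figure can be approximated from V$SYSSTAT with the standard hit-ratio formula:

```sql
-- Buffer cache hit ratio: 1 - physical reads / (db block gets + consistent gets)
SELECT 1 - (p.value / (d.value + c.value)) AS cache_hit_ratio
  FROM v$sysstat p, v$sysstat d, v$sysstat c
 WHERE p.name = 'physical reads'
   AND d.name = 'db block gets'
   AND c.name = 'consistent gets';
```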

Get an overall picture of activity on a database with the Overview chart
1. In the navigator window, select the ORCL database and then click the Oracle Performance Manager icon.
2. From the Monitor menu, click Display and then choose Overview.

Monitor disk access, resource contention, and memory utilization
1. Launch the Oracle Performance Manager in the context of the ORCL database, as explained in step 1 of the previous section.
2. From the Charts menu choose Define Window.
3. In the Window Name text box, provide a unique name.
4. Scroll through the list of available charts, select the chart you want, and click the << button.
5. Repeat step 4 for all the charts you need, and then click OK.
If the predefined charts don't suit your needs, you can create your own charts and save them for future use.

Creating your own charts
1. From the Charts menu, choose Define Charts.
2. Click the New Chart button.
3. Enter a name for the new chart.
4. In the SQL Statement text box, enter a statement that will gather the statistics to display in the chart.
5. Click the Execute button.
6. Verify the results in the results field.
7. On the Display Options page, enter the required information for each variable you want to display and click the Add button.
8. Click the Apply button.
9. Click OK.
10. From the File menu choose Save Charts, and save the chart in the Repository.

Recording Data for Playback
You can choose to record data in a chart for analysis at a later time. The collection size varies based on the polling interval, database activity at the time, and the collection interval.

Collect historical data
1. Display the charts from which you want to collect data.
2. From the Record menu choose Start Recording.
3. Provide a unique name in the Data Collection Name dialog box and click OK.
4. When finished with the data collection, choose Stop Recording from the Record menu.
5. Provide the database connect string in the Format/Playback Login dialog box.

Playback recorded data
1. From the Record menu choose Playback.
2. In the Format/Playback Login dialog box, provide the connect string on the database where the formatted data is saved.
3. Select the data collection to play back and click OK.
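As an example of the kind of SQL statement a user-defined chart can be built on (step 4 of the chart-creation procedure above), any query that returns the values to plot will do; for instance:

```sql
-- Number of sessions in each state, suitable for a bar or pie chart
SELECT status, COUNT(*) AS sessions
  FROM v$session
 GROUP BY status;
```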

Using Oracle Expert


Oracle Expert is a tool in the Performance Pack that you can use to tune a database. All tuning inputs and recommendations are stored in a tuning repository that allows the review and modification of the data and the rules at a later time. It has a knowledge base of tuning rules, designed through a tight relationship between the Oracle Server, Oracle Trace, and Oracle Expert development teams. It provides an explanation for all the recommendations it makes.

Don't run multiple sessions of Oracle Expert against the same repository
You can run multiple sessions of Oracle Expert against the same repository, but it's not recommended because it can lead to data conflicts between sessions.

You should use Oracle Expert to complement your tuning experience, not as a tool to replace your function as a database performance tuner. You should instead focus on what to do with the findings and suggestions provided by Oracle Expert and enhance the rules used by Oracle Expert in analyzing the performance data.

You can use Oracle Expert to tune the following:
- Instance. It consists of tuning the SGA parameters (which affect the total size of the instance's system global area), I/O parameters (which affect the throughput or distribution of I/O for the instance), parallel query parameters, and sort parameters.
- Application. It consists of tuning SQL and access methods, which involves determining the indexes needed and creating, modifying, and deleting them as needed.
- Structure. It consists of sizing (recommendations for choosing storage parameters) and placement (recommendations on placing data files and partitioning segments).

Increase the information analyzed by Oracle Expert

Before performing instance SGA tuning, run XPVIEW.SQL (in $ORACLE_HOME\rdbms\admin) against the database being tuned to get better recommendations from Oracle Expert. Doing so causes Oracle Expert to collect additional information about the database's shared SQL segment.

For Oracle Expert to perform data collection, the target database being tuned should have the following tables: dba_tab_columns, dba_constraints, dba_users, dba_data_files, dba_objects, dba_indexes, dba_segments, dba_ind_columns, dba_tables, dba_rollback_segs, dba_sequences, dba_views, dba_tablespaces, dba_synonyms, dba_ts_quotas, and dba_clusters. Oracle Expert doesn't collect information regarding index-only tables, partitioned tables, partitioned indexes, object types, object tables, and object views.

Use Oracle Expert to gather tuning information (general steps)
1. Set the scope of the tuning session to tell Oracle Expert what aspects of the database to consider for tuning purposes. Oracle Expert collects the following categories of data: database, instance, schema, environment, and workload.
2. The collected data is organized in a hierarchical format. You can view and edit the rules and attributes used by Oracle Expert.
3. Oracle Expert generates tuning recommendations based on the collected and edited data. You can decide to use the recommendations, or ignore them and let Oracle Expert generate a new recommendation.
4. When you're satisfied with the recommendations, you can let Oracle Expert generate parameter files and scripts to implement the chosen recommendations.

Don't tune the SYS or system schema
Don't use Oracle Expert to tune the SYS or system schema. You should let Oracle tune these items automatically.

Start an Expert Tuning session
1. From the File menu choose New.
2. Define the scope of the tuning session.
3. On the Collect page, specify the amount and type of data to collect.
4. Click the Collect button to acquire the required data.
5. On the View/Edit page are the rules used by Expert. You can modify the rules based on your experience.
6. On the Analyze page, click the Perform Analysis button to begin the data analysis.
7. Select Review Recommendations to review the recommendations provided by Expert.
8. If you agree with the recommendations, you can implement them by generating the requisite scripts and parameter files from the Implement page. If you don't agree with the recommendations, you'll have to change one or more rules and re-analyze (without recollecting) the data.

Have enough privileges to perform some functions
If the database management functions are grayed out from the menu bar, it may be because you aren't authorized to perform those functions. Reconnect as SYSOPER or SYSDBA.

The collection classes to use are determined by the selected tuning categories for a tuning session.

Reuse collected data
When tuning multiple categories, the common classes need to be collected only once because Oracle Expert will be able to reuse the data for analysis.

Start Oracle Expert
1. In the OEM map or navigator window, select a database and then click the Oracle Expert icon in the Performance Pack palette. Or double-click the Expert icon in OEM's Program Manager.
2. Connect to a tuning repository.
3. From the File menu choose New to create a new tuning session.
4. Enter the appropriate data in the dialog box pages.
5. Click OK.

Permissions to use Oracle Expert
The user running Oracle Expert must have SELECT ANY TABLE privilege for the database in which the repository is stored.
Using Oracle TopSessions


You can use the Oracle TopSessions utility to view the top Oracle sessions based on specified criteria, such as CPU usage and disk activity. Before running TopSessions for Oracle8, run $ORACLE_HOME/sysman/smptsi80.SQL to create all the supporting tables.

Identify Oracle sessions that use the most CPU
1. In OEM's navigator window, select the ORCL database and then click the Oracle TopSessions icon.
2. On the Sort page of the Options property sheet (see Figure 4.12), change the Statistics Filter to User and the Sort Statistic to CPU Used by This Session.
Figure 4.12: You can use the Sort page of the Oracle TopSessions Options dialog box to specify the criteria to use for monitoring sessions.
3. On the Refresh page of the Options property sheet (see Figure 4.13), select Automatic for the refresh type, set the Refresh Interval to 10, and reset the Minutes and Hours to 0.
Figure 4.13: The Refresh page of the Options dialog box can be used to change the refresh type and refresh interval.
4. On the Count page of the Options property sheet (see Figure 4.14), select the Display Top N Sessions button and change the count to 10.
Figure 4.14: The Count page of the Options dialog box can be used to specify the number of sessions to track.
5. Click OK to show the results (see Figure 4.15).
Figure 4.15: Oracle TopSessions shows the results as specified by the resource usage criteria.
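The same top-CPU information can be pulled directly from the V$ views. A sketch of such a query, ordering sessions by the 'CPU used by this session' statistic:

```sql
-- Sessions ordered by CPU consumption (value is in hundredths of a second)
SELECT s.sid, s.username, st.value AS cpu_used
  FROM v$session s, v$sesstat st, v$statname n
 WHERE st.sid = s.sid
   AND st.statistic# = n.statistic#
   AND n.name = 'CPU used by this session'
 ORDER BY st.value DESC;
```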


Contact Us | Copyright, Terms & Conditions | Privacy Policy | Advertising Copyright 2000 InformIT. All rights reserved.


informit.com -- Your Brain is Hungry. InformIT - Managing Your Database Space From: Using Oracle8


Managing Your Database Space





From: Using Oracle8 Author: David Austin Publisher: Que More Information

Space Management Fundamentals
- Suggested Tablespaces
- Understanding File Types: File Systems Versus Raw Devices
- Understanding the Benefits of Striping Data

Adding New Tablespaces
- Creating a Tablespace
- Setting Default Storage Values

Tablespace Management
- Changing the Characteristics of a Tablespace
- Dropping Tablespaces

Extent Allocation
- Comparing Dynamic and Manual Extent Allocation
- Releasing Unused Space
- Defragmenting Free Space

In this chapter you learn to:
- Identify different segment types
- Design tablespaces for different segment types
- Manage tablespaces
- Make effective use of physical file structures
- Control unused and free space

Space Management Fundamentals


The basic storage unit in an Oracle database is the Oracle block (or database block). This is the smallest unit of storage that will be moved from disk to memory and back again. Oracle block sizes range from 2KB to 32KB. You obviously couldn't store a complete table on a single block, so blocks are grouped into segments to store amalgams of data. There are various types of segments:

The DBA's role in space management
Space management is an ongoing task for most of you. Unless you have a completely static database, tables and indexes will regularly grow, or shrink, in size. You need to ensure that sufficient space is available for this to occur without interruption to the ongoing processing. You also need to help ensure that the space is being used efficiently.

- Data segments contain rows from a single table or from a set of clustered tables.
SEE ALSO
See how to use clusters on
- Index segments contain ordered index entries.
- LOB segments contain long objects.
- LOB index segments contain the special indexes for LOB segments.
- Rollback segments store "before" images of changes to data and index blocks, allowing the changes to be rolled back if needed.
- Temporary segments hold the intermediate results of sorts and related processing that are too large to be completed in the available memory.
- A single cache segment (also known as the bootstrap segment) holds boot information used by the database at startup.

You may have very large segments in your database, and it may be impossible to put the whole thing into a set of contiguous Oracle blocks. Oracle therefore builds its segments out of extents, which are sets of logically contiguous blocks. "Logically contiguous" means that the operating system and its storage subsystems will place the blocks in files or in raw partitions so that Oracle can find them by asking for the block offset address from the start of the file. For example, block 10 would begin at the (10 x Oracle block size) byte in the data file. It doesn't matter to Oracle if the operating system has striped the file so that this byte is on a completely different disk than the one immediately preceding it. The database always accesses blocks by their relative position in the file.

A large segment may have several, or even several hundred, extents. In some cases it will be too big to fit into a single file. This is where the last Oracle storage construct plays a part. Rather than force users to deal with individual files, Oracle divides the database into logical units of space called tablespaces. A tablespace consists of at least one underlying operating system file (or database data file). Large tablespaces can be composed of two or more data files, up to 1,022 files.

Every segment or partition of a partitioned segment must be entirely contained in a single tablespace. Every extent must fit entirely inside a single data file. However, many extents can comprise a partition or a non-partitioned segment, and the different extents don't all have to be in the same data file. Only one type of database object, a BFILE (binary file), is stored directly in an operating system file that's not part of a tablespace.
There are six reasons to separate your database into different tablespaces:

Recommendation for using multiple tablespaces
Although Oracle doesn't prevent you from creating all your segments in a single tablespace, Oracle strongly recommends against it. You can control your space more easily by using different tablespaces than you can if everything were placed in a single tablespace. Multiple files let you build your tablespace in a manner that helps you improve space usage as well as database performance.

- To separate segments owned by SYS from other users' segments
- To manage the space available to different users and applications
- To separate segments that use extents of different sizes
- To separate segments with different extent allocation and deallocation rates
- To distribute segments across multiple physical storage devices
- To allow different backup and related management cycles and activity
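The second of these reasons, controlling how much space users can consume, is enforced with tablespace quotas set through ALTER USER. A sketch, with hypothetical user and tablespace names:

```sql
-- Limit one schema to 50MB in APP_DATA and forbid it any space in SYSTEM
ALTER USER jsmith QUOTA 50M ON app_data;
ALTER USER jsmith QUOTA 0 ON system;

-- UNLIMITED removes the ceiling for a trusted application owner
ALTER USER app_owner QUOTA UNLIMITED ON app_data;
```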

The first reason to use multiple tablespaces is to keep the segments owned by user SYS away from any other segments. The only segments SYS should own-and the only ones that Oracle will create and manage for you-are those belonging to the data dictionary.

The data dictionary
The data dictionary is the road map to all the other database segments, as well as to the data files, the tablespace definitions, the Oracle usernames, passwords and related information, and many other types of database objects.

The dictionary needs to be modified as objects are added, dropped, or modified, and it must be available at all times. By keeping it in its own tablespace, you're less likely to run out of room (which would bring the database to a complete halt if it prevented SYS from modifying the dictionary).

A second reason to use different tablespaces is to control how much space different schemas can take up with their segments. Each user can be assigned just so much space in any tablespace and doesn't need to be assigned any space at all in some tablespaces. Some end users may have no space allocated to them at all because their database access consists solely of manipulating segments that belong to the application owner.

The third reason to manage your segments in different tablespaces has to do with space usage by the extents
http://www.informit.com/content/0789716534/element_005.shtml (2 of 20) [26.05.2000 16:47:03]

informit.com -- Your Brain is Hungry. InformIT - Managing Your Database Space From: Using Oracle8

belonging to different segments. As you can imagine, most databases have some segments that are large and some that are small. To make effective use of the space, you would assign different-sized extents to these objects. If you mix these extents in a single tablespace, you may have problems if you need to drop or shrink the segments and try to reuse the freed space.

Consider the example of a kitchen cabinet where you keep all your canned goods on one shelf (see Figure 5.1). After a trip to the grocery store, you can almost fill the shelf with cans of different sizes. Suppose that during the week you take out half a dozen or so of the smaller cans. After another trip to the grocery store, you couldn't simply put large cans in the spaces where the small ones came from, even if there were fewer large cans. You might have enough space if you rearranged the cans, but none of the spots is initially big enough for a single large can.

Figure 5.1: You may have space management problems if you try storing different-sized objects together.

Now suppose that you take out only large cans and then put medium-sized cans in their place. You still have some space around the medium-sized cans, but not enough to store even a small can. Again, you could find room for a small can, but only by shifting things around, as shown in Figure 5.2.

Figure 5.2: You'll run into space problems when replacing objects of different sizes.

When large and small extents try to share limited tablespace storage, they can "behave" like the cans. Space can be freed when extents are removed, but it's not necessarily of a useful size. Unlike you reorganizing the kitchen cupboard, however, the database can't easily shift the remaining extents around to make the space more useful. When space in a tablespace consists of an irregular checkerboard of used and free block groups, it's said to be fragmented. Much of the free space may never be reusable unless the extents are somehow reorganized.
You can prevent such fragmentation by allowing only extents of the same size in the tablespace. That way, any freed extent is exactly the right size for the next extent required in the tablespace.

Avoiding different free space extent sizes requires different tablespaces: To allow different-sized extents in your database without mixing them in the same storage space, you need different tablespaces.

A variant of the fragmentation problem provides a fourth reason for multiple tablespaces. Some segments are very unlikely to be dropped or truncated. For example, a mail-order business's CURRENT_ORDERS table will likely do nothing but grow, or at least stay about the same size, as new orders are added and filled orders removed. On the other hand, you may have a data warehouse in which you keep the last five years' order records. If you build this table as a series of 60 month-long partitions, you'll be able to drop the oldest one each month as you add the latest month's records.

Tables with different propensity to fragment free space: The CURRENT_ORDERS table will never contribute to a fragmentation problem because its extents never go away. The data warehouse partitions, however, are dropped on a regular basis, so they have a high propensity to cause fragmentation.

Thus, the fourth reason to keep segments in different tablespaces is to separate segments with a low, or zero, propensity to fragment space from those with a high likelihood of causing fragmentation. This way, the long-term objects, if they do need to grow, won't have to hunt around for free space of the right size.

The fifth reason to use different tablespaces is to help distribute the data file reads and writes across multiple disk drives. You can use many different data files in a tablespace and place them on different disk drives, but you may not be able to control which extents are placed in which file after doing that.
If you're lucky, the amount of disk access will be even across all the drives. If you aren't so lucky, you might have a situation where the two busiest extents in your database are in the same file. For example, if the mail-order house is going into its busy holiday season sale period, it will probably need to use the empty blocks at the end of the CURRENT_ORDERS table for the additional orders. If the extent holding these blocks is in the same data file as the index blocks where the newest order numbers are being saved, you'll have excessive disk contention; each new order will use a block from the table extent and a block from the index extent. If you keep segments that are likely to cause concurrent disk access (such as tables and the indexes on those tables) in different tablespaces, you can guarantee that the files making up the different tablespaces are stored on separate disk drives.

A database-management issue is the final reason to use different tablespaces. During its life, a database will need to be backed up and possibly repaired if a disk crashes or otherwise corrupts data.


Tablespace damage can be pervasive: If any part of a tablespace is damaged, the entire tablespace becomes unusable until the damage is fixed.

If every segment belonging to an application were stored in a single tablespace and that tablespace were damaged, nobody could use that application. However, if you split segments that belong to different functional areas of the application (such as order entry and accounts receivable) into different tablespaces, a data file problem may not be so intrusive. A failure in a data file in the accounts-receivable tablespace could be undergoing repair without any impact being felt by the order takers using the order-entry tablespace.

Similarly, a tablespace containing tables that rarely change, such as lookup tables for state codes, part number and part name references, and so on, may not need regular backing up, whereas CURRENT_ORDERS may need very frequent backups to reduce the tablespace recovery time if there were a failure.

Backing up tablespaces on different schedules: To minimize the time it takes to back up less-often-used segments, back up different tablespaces on different schedules.

In fact, you can define a truly read-only tablespace to the database and then back it up only once, when you've finished loading the required data into it. Of course, if you mix the static tables and the busy tables in the same tablespace, you have to back them all up equally often.

SEE ALSO: To learn more about objects, LOBs, and BFILEs,

Suggested Tablespaces
I recommend that every DBA create certain tablespaces for a production database. The reasons for these different tablespaces stem from the previous discussion. Let's begin with the SYSTEM tablespace, the only mandatory tablespace in every Oracle database.

SYSTEM Tablespace

Every Oracle database must have one tablespace: SYSTEM. This is where the user SYS stores the data dictionary information needed to manage the database. You should create additional tablespaces based on the expected use of your database. If you don't do this and use only the SYSTEM tablespace, you'll violate most of the reasons for using different tablespaces recommended in the previous section.

Maintain the integrity of the SYSTEM tablespace: You should never need to create any object directly in the SYSTEM tablespace, regardless of which userid you use to connect to the database. This tablespace should be reserved for the recursive SQL executed behind the scenes as part of database creation or the execution of standard SQL statements.

Several other things will happen as well:

- You won't keep the segments owned by SYS out of harm's way from other users.
- You'll have to allow everyone who needs to create objects to take space from SYS in the SYSTEM tablespace.
- You'll cause fragmentation because extents of all sizes will share the same space.
- You'll have a mix of segments with high and low propensity to fragment space in the same space.
- You can't easily avoid having high-usage extents stored on the same disk drive.
- You have a single point of failure for the whole database: if a data file in the SYSTEM tablespace is lost or damaged, the entire database will shut down.

I hope this list of possible problems has convinced you to use additional tablespaces, such as those described in the next few pages.

Rollback Segment Tablespaces

The tablespace for rollback segments will contain all the database's rollback segments with one exception: the SYSTEM rollback segment, which is maintained automatically in the SYSTEM tablespace. Keep your rollback segments separate from other database objects for a number of reasons:

- They can shrink automatically and therefore create fragmented free space.

Rollback segment shrinkage:


Although you can define rollback segments to shrink by themselves if they grow too large, you can also use a special ALTER ROLLBACK SEGMENT command option to shrink them manually.

SEE ALSO: For details of the ALTER ROLLBACK SEGMENT command,

- They don't need quotas, so their space can't be managed by schema quotas.
- They're needed concurrently with data blocks during transaction processing, so they can lead to disk contention.

Rollback segments can be defined to shrink themselves automatically if they grow larger than needed. Thus, they have a very high propensity for fragmenting the free space. Fortunately, a rollback segment is required to use the same-sized extents as it grows and shrinks, so it can reclaim any empty space it leaves behind when shrinking.

Maintaining multiple rollback segment tablespaces: If all the rollback segments in a single tablespace are sized identically, they can even claim the space released by the other segments. With this arrangement you shouldn't have many problems with space overuse or waste in a rollback segment tablespace, as long as the segments don't all try to grow too much at the same time. Should you need rollback segments of different sizes, therefore, consider building a second rollback segment tablespace, keeping the larger segments in one tablespace and the smaller ones in the other.

Another reason for using a dedicated tablespace for rollback segments is that you create them for use by anyone. Users don't need to have a space quota on the tablespace where the rollback segments reside. This allows the rollback segments to exist without having to contend for space with the segments that belong to an application (suddenly having their free space taken up by a table created with excessive storage requirements, for example).

The final reason for keeping rollback segments in their own tablespace(s) is that you can help avoid contention for the disk. When a table is being modified, not only does Oracle modify the contents of the table block(s), but a set of rollback entries is also stored in a rollback segment. If the table and the rollback segment belonged to the same tablespace, the blocks being changed in each could be in the same data file on the same disk.
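As a sketch of this arrangement, the statements below create a dedicated rollback segment tablespace and a rollback segment in it, then shrink the segment manually. The tablespace name, file path, and all sizes are invented for illustration, not recommendations:

```sql
-- Hypothetical example: a tablespace reserved for rollback segments,
-- with uniform default extents so shrinking segments can't fragment it
CREATE TABLESPACE rbs
  DATAFILE '/d4/oracle/rbs01.dbf' SIZE 200M
  DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0);

-- A rollback segment that grows and shrinks in 1MB extents,
-- deallocating extents automatically back toward its OPTIMAL size
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 10 OPTIMAL 10M);

-- The manual shrink mentioned in the text
ALTER ROLLBACK SEGMENT rbs01 SHRINK TO 10M;
```

Because rollback segments must use equal-sized extents, INITIAL and NEXT match; PCTINCREASE isn't allowed in a rollback segment's own STORAGE clause, so it's set only at the tablespace level.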
Temporary Tablespaces

Temporary tablespaces are for the temporary segments built by Oracle during sorting and related operations, when the amount of memory available to the server is insufficient to complete the task. The important characteristic of temporary segments is that Oracle creates and manages them without any input from the users. Consequently, unlike the other segments (except the bootstrap segment), there's no CREATE syntax to identify which tablespace the segment is created in, nor what size or number of extents to use. Neither is there any DROP command, or any other command, that lets you release the space from temporary segments.

Default behavior of temporary segments: Temporary segments obtain their storage characteristics solely from the tablespace definition. Hence, unless you have a tablespace for other types of segments that need storage characteristics identical to your temporary segments, you'll need a dedicated temporary segment tablespace.

Even if you have a tablespace that appears able to share storage characteristics between regular segments and temporary segments, you may want to create a separate one for your temporary segments. The reason is that temporary segments are so named because of their default behavior: because they're used just to store intermediate values of an ongoing sort, they aren't needed by the server process when the sort is complete, and by default they're dropped as soon as their work is done. Obviously, the dropping of these segments on a transaction boundary makes them very prone to fragmenting free space as they are created and dropped. You can help mitigate the fragmentation problems by ensuring that the extents used by the temporary segments are all exactly the same size. The ephemeral nature of temporary segments by itself makes them candidates for their own tablespace.
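A minimal hedged sketch of such a dedicated tablespace follows; the names, path, and sizes are made up for illustration:

```sql
-- Hypothetical example: a tablespace dedicated to temporary (sort) segments,
-- with uniform default extents so dropped sort extents leave reusable holes
CREATE TABLESPACE temp
  DATAFILE '/d5/oracle/temp01.dbf' SIZE 500M
  DEFAULT STORAGE (INITIAL 2M NEXT 2M PCTINCREASE 0);

-- Temporary segments are built wherever each user's definition points,
-- so direct a user's sort activity at the dedicated tablespace
ALTER USER scott TEMPORARY TABLESPACE temp;
```

Because temporary segments inherit the tablespace's DEFAULT STORAGE values, setting INITIAL equal to NEXT with PCTINCREASE 0 is what keeps every sort extent the same size.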
However, there's one further consideration: you can create a tablespace, or alter an existing tablespace, to contain only temporary segments. By doing this, you change the default behavior of temporary segments. Rather than drop the segment when it's no longer needed, the server simply records in a special data dictionary table that the extents in the segment are now free. Any other server needing temporary space can query this table and locate available temporary extents. In this way, the database will save the space allocated to temporary

segments indefinitely and assign it for use as needed by various servers.

Reduce database overhead with TEMPORARY-type tablespaces: The characteristic of preserving temporary segments in TEMPORARY-type tablespaces saves a lot of the recursive SQL associated with creating and managing the space for temporary processing. In the long run, reducing recursive SQL speeds the processing of the entire database.

It's essential that you have a tablespace dedicated to temporary segments if you want to take advantage of this behavior. You aren't allowed to place anything other than temporary segments in a tablespace defined as a temporary type. As with the other tablespaces being discussed, you aren't limited to just one temporary tablespace (other than SYSTEM); you can add as many as you need to avoid contention for the space in a single temporary tablespace. Temporary space is assigned to users as part of the user definition, so if you need to, you can subdivide your user community to access different temporary tablespaces.

A final benefit of keeping your temporary tablespaces separate from other tablespaces is that, because the use of temporary space results from processing performed behind the scenes for the user, you don't need to assign space in temporary tablespaces to your users. You can keep your temporary tablespace(s) clear of other user-created segments by disallowing any storage on them. As with the SYSTEM tablespace, this avoids problems that could occur should the required space be taken up by segments that don't belong in this reserved space.

User Data Tablespaces

For most of you, the largest amount of storage in your database will be taken by rows in your applications' tables. You should realize that these data segments don't belong in the SYSTEM tablespace, nor should they be stored with your rollback segments or your temporary segments. I recommend that you build one or more tablespaces to store your database tables.
You would need more than one user data tablespace for a number of reasons, all related to the discussion in the section "Identifying Tablespace Uses":

- Segment and extent sizes. It's improbable that all application tables will need to be the same size, or even have same-sized extents. To avoid fragmentation, you should place tables into tablespaces only with other tables that use the same extent size.
- To allow you to manage them differently. If your database supports more than one application, you may want to protect them from one another by using separate tablespaces for them. In this way only one set of your users would be affected by a disk problem in a tablespace. Users of the applications supported in the unaffected tablespaces can continue their work while the repair work is done to the damaged tablespace.
- To keep volatile tables away from static (or almost static) tables. This way you can organize your backup strategy around the needs of each tablespace, backing up the tablespaces with busy tables more frequently than those with less busy tables. For those tables that never change, or change very infrequently, you can make the tablespaces that hold them read-only. Such a tablespace needs to be backed up only once following its conversion to this status.
- To place your very busy tables in tablespaces different from each other. This way you can avoid the disk-drive contention that could occur if they share the same disks.

Managing tablespace extent size: You may want to standardize your tables on three or four extent sizes. This will reduce the number of different tablespaces you'll need to manage while allowing you to realize the benefits of having all the extents in a tablespace be the same size. In particular, you won't have to concern yourself with the frequency at which extents are added and dropped. Such activity won't lead to poor space allocation, because every dropped extent leaves free space exactly the same size a new extent would require.
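As a small hedged illustration of the read-only point (the tablespace name is invented):

```sql
-- Hypothetical example: freeze a tablespace of static lookup tables.
-- After this, back it up once; no further scheduled backups are needed
-- unless it's made writable again.
ALTER TABLESPACE lookup_data READ ONLY;

-- Return it to normal before loading new reference data
ALTER TABLESPACE lookup_data READ WRITE;
```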

By the time you finish planning your user data tablespaces, you may have divided them for a combination of the reasons discussed here. It wouldn't be unreasonable to have two or three tablespaces holding tables with same-sized extents (each with a different backup frequency requirement), and another two or three with the same extent sizes containing tables that have the same backup requirements but a high contention potential.

Index Tablespaces

Indexes on tables are often used by many concurrent users who are also accessing the tables as part of the same transaction. If you place the indexes in the same tablespace as the tables they support, you're likely to cause disk

contention; this would occur as queries retrieved index blocks to find where the required rows are stored and then retrieved the required data blocks. To avoid such contention, you should create a separate set of tablespaces for your indexes.

As with tables, you may find that you need to size your index extents differently, and you may have indexes supporting tables from different applications. Just as with your user data tablespaces, therefore, you should plan on building multiple index tablespaces to support different extent sizes and backup requirements and to maintain application independence in case of disk failure.

If you use the Oracle8 partitioning option, you may need to revise your index and user-data tablespace design. In some cases it's beneficial to build locally partitioned indexes in the same tablespaces as the parent partition. This helps maintain the availability of the partitions during various tablespace maintenance activities.

SEE ALSO: For additional details on rollback segment management,
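To make the table/index separation concrete, here's a minimal hedged sketch; the tablespace, table, and index names are invented for illustration:

```sql
-- Hypothetical example: keep a table and its index in tablespaces
-- whose data files sit on different disk drives
CREATE TABLE current_orders (
  order_id   NUMBER(10),
  order_date DATE,
  amount     NUMBER(10,2)
) TABLESPACE user_data;

CREATE INDEX current_orders_date_idx
  ON current_orders (order_date)
  TABLESPACE indx;
```

With user_data and indx built from files on separate drives, a query that walks the index and then fetches table rows reads from two spindles instead of one.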

Understanding File Types: File Systems Versus Raw Devices


Some operating systems allow you to create raw partitions and use these for your Oracle database files instead of standard file-system files. In certain cases, raw devices can offer you improved performance. In addition, if you're using a UNIX system that doesn't support a write-through cache, you'll have to use raw devices to ensure that every write performed during a checkpoint or a log buffer flush is actually written to the physical disk.

Raw devices and Oracle Parallel Server: If you're going to use the Oracle Parallel Server option on UNIX or Windows NT, you must create all your database files, including the redo log files, control files, and data files, on raw devices. The various instances can't share the files if you don't do this.

There are some drawbacks to raw devices:

- On UNIX, sequential data reads from a raw device can't take advantage of read-ahead techniques. This means that a full table scan may be slower if other users are accessing different Oracle blocks on the same disk as the table being scanned.
- Raw partitions aren't as flexible as file-system files. You're generally restricted to a certain number and, hence, fixed partition sizes on each disk. Also, you may not have permission as a DBA to create new raw partitions. This means that you have to work with a system administrator when you need to add a file. Although this may be easy enough when you're planning for a new database or a known expansion, it may not be convenient when you need to add or replace a data file in an emergency. Your system administrator may have to prebuild spare raw partitions for you to use in such emergencies. This approach, in turn, requires you to standardize on partition (and, therefore, on data file) sizes. As with extent sizes, you may want to simplify this approach by standardizing on just three or four partition sizes across the whole database.
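A hedged sketch of naming a raw partition as a data file follows; the device path is platform-specific and invented here:

```sql
-- Hypothetical example: add a data file on an existing raw partition.
-- Size the file slightly smaller than the partition (about 1MB is a
-- convenient allowance) to leave room for the OS header information.
ALTER TABLESPACE user_data
  ADD DATAFILE '/dev/rdsk/c0t2d0s4' SIZE 100M REUSE;
```

The partition must already exist; if it doesn't, Oracle would try to create an ordinary file with that name instead.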

Understanding the Benefits of Striping Data


In some situations, no matter how well you've segregated your data into separate tablespaces, particularly busy tables or indexes will cause "hot spots" on one or more disks. To resolve this, you may need to build your data files by using an operating system striping mechanism. Such mechanisms include logical volume manager (LVM) software and redundant arrays of inexpensive disks (RAID). By striping the data file across many disks, you reduce the likelihood that any particular table or index will have all its busiest blocks on a single disk.

If you decide to stripe your data files, you need to determine an appropriate stripe size. Optimal stripe sizes depend on a number of factors, the three key ones being the implementation of the striping software, the Oracle block size, and the type of database processing against the data.

Stripe sizes for query-intensive data: If your database is query-intensive, that is, the users are usually executing queries and very infrequently performing INSERTs, UPDATEs, or DELETEs, you need a stripe size that will support sequential processing of data. This will help queries that need to access indexes via range scans (many records with the same value, or a set of values between an upper and lower bound) or tables via full table scans. You should set the stripe size for such databases to a minimum of two times the value of the parameter DB_FILE_MULTIBLOCK_READ_COUNT and, if larger, to an integer multiple of this parameter's value. This helps ensure that all the blocks requested in a single read when performing a full table scan will be on the same disk, requiring only one disk read/write head to be moved to the required starting position.

Operating system and disk-striping mechanisms differ widely between vendors. Some offer large, guaranteed cache areas used for reads, writes, or both; a cache can overcome some of the performance slowdowns that occur when you need to sequentially read blocks scattered across many different disks. Others, such as RAID, provide striping as a side benefit of various levels of disk-failure resilience, but can slow certain activities in order to provide that protection against disk failure. Certain levels of RAID are better left unused for particular file types, such as those with large amounts of data written to them sequentially. For example, redo logs, while not part of a tablespace, may suffer a performance penalty if stored on RAID Level 5. If you have any tables that collect data in a similar sequential fashion, you should also try to avoid placing their data files on RAID Level 5 devices.

Oracle block size is an important factor when striping data files that contain tables or indexes typically accessed via random reads or random writes. You'll usually see this type of activity when the database is used primarily for transaction processing. In these cases, you should plan to make your stripe size at least twice the size of an Oracle block but, all things being equal, not too much larger. Whatever stripe size you choose, however, make sure it's an integer multiple of your Oracle block size.
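The stripe-size arithmetic above depends on two initialization parameters, which you might check with a query like the following sketch (output format varies by release):

```sql
-- Hypothetical check: the two parameters that drive stripe sizing.
-- With db_block_size = 8192 and db_file_multiblock_read_count = 16,
-- one multiblock read covers 8192 * 16 = 131072 bytes (128KB), so a
-- sequential-friendly stripe would be 256KB or a larger integer
-- multiple of 128KB; a transaction-processing stripe would be a small
-- multiple of the 8KB block.
SELECT name, value
  FROM v$parameter
 WHERE name IN ('db_block_size', 'db_file_multiblock_read_count');
```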

Adding New Tablespaces


You automatically create your SYSTEM tablespace when you build your database. As discussed earlier in this chapter, you should add further tablespaces to meet your database's specific requirements. The following sections go over the process of creating tablespaces with various characteristics.

Creating a Tablespace
The very first tablespace you create is the SYSTEM tablespace, always part of an initial database creation. Additional tablespace creation isn't that much different from the SYSTEM tablespace creation. As with the CREATE DATABASE command, the CREATE TABLESPACE command uses a DATAFILE clause to identify the data file(s) and size(s) you want to associate with the tablespace. The syntax for the CREATE TABLESPACE command is as follows:

CREATE TABLESPACE tablespace
    DATAFILE file_specification [, file_specification]...  Identifies file(s) to be used and their characteristics
    [MINIMUM EXTENT integer [K|M]]                         Sets minimum size of used and free extents in tablespace
    [LOGGING|NOLOGGING]                                    Determines whether certain SQL commands will avoid creating standard redo log entries
    [DEFAULT STORAGE (storage_clause)]                     Controls extent behavior for segments created without defined storage options
    [ONLINE|OFFLINE]                                       Determines status of tablespace after creation
    [PERMANENT                                             Defines tablespace to hold regular segments
    |TEMPORARY]                                            Defines tablespace to hold only temporary segments

We'll examine the DEFAULT STORAGE clause in the following section. In the meantime, look at the DATAFILE clause in more detail. This clause can be applied to any tablespace's data files (including the SYSTEM tablespace), although most DBAs are content to use it simply to name and size the data file(s) for this tablespace. The DATAFILE clause's full syntax is as follows:


DATAFILE filename
  [SIZE integer [K|M]] [REUSE]
  [AUTOEXTEND OFF
  |AUTOEXTEND ON [NEXT integer [K|M]]
                 [MAXSIZE [UNLIMITED|integer [K|M]]]]

Recall from the earlier section "Understanding File Types: File Systems Versus Raw Devices" that you can use native file-system files or raw partitions for your tablespace's files. The data file's name will therefore be a file-system filename, a raw partition name, or possibly a link name pointing to either type of file. In the case of a file-system file, the file will be created for you unless you use the REUSE clause; in that case, Oracle will create the file if it doesn't already exist, but will overwrite an existing file as long as the SIZE clause, if included, matches the size of the existing file. If you don't specify REUSE and the file already exists, you get an error message and the tablespace won't be created.

The REUSE option can be destructive: Be careful with REUSE; any current contents of the file will be overwritten and lost when it's implemented. A raw partition can always be reused, destroying its existing content, even if you don't include the REUSE keyword.

If you name a raw partition (directly or via a link), the partition must already exist; otherwise, Oracle will attempt to create a standard file with the name of the partition. Because Oracle expects raw partitions to exist before being named in the CREATE TABLESPACE command, the REUSE clause really has no effect on them. The SIZE specified for a raw partition must be a few blocks smaller than the actual partition size; this allows space for operating system header information. Two operating system blocks are usually sufficient.

Simplify your raw partition sizing: You may want to keep your arithmetic simple when sizing raw partitions for Oracle files by allowing 1MB for the overhead in each partition. Thus, you would create a 101MB partition to hold a 100MB file.
The AUTOEXTEND option determines whether a data file can grow automatically should a segment need a new extent and there's an insufficient number of contiguous free blocks. You don't have to use this clause when you create the tablespace to be able to grow your tablespace, as discussed later in the "Adding and Resizing Data Files" section. If you decide you want your files to be able to grow automatically, you should be aware of the following behaviors:

- If there are multiple files with the AUTOEXTEND option in a single tablespace, the file chosen to grow when more space is needed depends on a couple of characteristics. Oracle will try to extend the file that can be extended least to obtain the required space. If this results in a tie between two or more files, the one furthest from its maximum size will be extended. If this also results in a tie, the files will be extended in a round-robin fashion as more space is needed.
- If the NEXT option isn't chosen, the files are extended one Oracle block at a time.
- If you don't set an upper limit with the MAXSIZE option, or use MAXSIZE UNLIMITED, the file can grow indefinitely until it reaches the limits of the physical storage device.
- You can allow files in raw partitions to grow, but the data will overwrite the adjacent partition(s) if the partition size isn't large enough to contain the extended file; this destroys the contents and integrity of some other file.

SEE ALSO: For more information on creating a database,
SEE ALSO: To learn about temporary segments and how they're used in TEMPORARY-type tablespaces,
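Putting the DATAFILE options together, a hedged sketch (name, path, and sizes invented) might look like this:

```sql
-- Hypothetical example: a tablespace whose single data file starts at
-- 100MB and grows automatically in 10MB steps, capped at 500MB so it
-- can't consume the whole device
CREATE TABLESPACE app_data
  DATAFILE '/d2/oracle/app01.dbf' SIZE 100M
    AUTOEXTEND ON NEXT 10M MAXSIZE 500M
  DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0);
```

Specifying NEXT avoids the one-block-at-a-time extension behavior, and MAXSIZE guards against unlimited growth.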

Setting Default Storage Values


As noted in the earlier sections "Identifying Tablespace Uses" and "Suggested Tablespaces," a tablespace should ideally contain only segments with equal-sized extents. One way to help you maintain such a scheme is to assign to each tablespace the desired storage options as defaults. The CREATE TABLESPACE command's DEFAULT STORAGE clause is the means to achieve this.

Your default storage values can be overridden! After a user has the privileges necessary to build segments, you can't prevent them from overriding the tablespace default storage values. You need to take on the responsibility of building all segments yourself if you don't want to risk this. To reduce the burden of work for yourself, you can have the users build script files containing the required CREATE commands, which you can simply execute on their behalf. Of course, you should check that these scripts don't include their own STORAGE clauses before running them.
http://www.informit.com/content/0789716534/element_005.shtml (9 of 20) [26.05.2000 16:47:03]

informit.com -- Your Brain is Hungry. InformIT - Managing Your Database Space From: Using Oracle8

Whenever a database segment such as a table, a rollback segment, or an index is created, a set of storage-related information is activated or stored with the segment definition. This storage information defines the size of the first extent belonging to the segment, the size of the second extent, the size of the subsequent extents, and the initial and maximum number of extents that will be assigned to the segment. In the case of rollback segments, there's also a value for the optimal size of the rollback segment, which, if used, causes extents to be dropped automatically if the overall size exceeds the desired maximum.

SEE ALSO
To learn more about managing rollback segments and the OPTIMAL storage option,

Although each user who creates a segment can assign these storage values individually, they can also be inherited from the tablespace's definition. Temporary segments are a little different in that users don't get to create them; they're built as needed by the system on behalf of a user and, as such, always inherit the tablespace storage values. If you've built your tablespaces such that each one is designed to hold only one extent size, you can define the tablespace to provide this size by default. You can then advise those users who create segments (if it's someone other than yourself) that they shouldn't include the STORAGE clause in their CREATE statements. This not only simplifies their work, but keeps your tablespace extents defined as you planned. You define the inheritable storage values for a tablespace with the DEFAULT STORAGE clause. Here is the syntax for that clause:

DEFAULT STORAGE (
   INITIAL     integer [K|M]
   NEXT        integer [K|M]
   PCTINCREASE integer
   MINEXTENTS  integer
   MAXEXTENTS  integer )

INITIAL       Sets the size of the initial extent in bytes, with optional K or M to specify kilobytes or megabytes
NEXT          Sets the size of the second extent in bytes, with optional K or M
PCTINCREASE   Defines the increase, measured as a percentage, by which each extent beyond the second will grow
MINEXTENTS    Sets the number of extents each segment will be assigned when created
MAXEXTENTS    Sets the greatest number of extents that a segment will be assigned

You need to set INITIAL equal to NEXT and PCTINCREASE equal to 0 in order for the tablespace to create every extent, by default, with the same size. Remember that even though you set these defaults, every CREATE statement that builds a segment in the tablespace can override them. This is true even if you allow users to include a STORAGE clause simply to change the number of preliminary or maximum extents (MINEXTENTS and MAXEXTENTS). As soon as they can use a CREATE command, you can't restrict what's included in the related STORAGE clause.

The following listing shows a command being used to create a tablespace with three data files, one of which is auto-extendible, and with a default storage clause to build all extents with 10MB of storage:

CREATE TABLESPACE extra_room
   DATAFILE
      '/d1/oracle/exrm01.dbf' SIZE 1000M,
      '/d2/oracle/exrm02.dbf' SIZE 1000M,
      '/d3/oracle/exrm03.dbf' SIZE 1000M
         AUTOEXTEND ON NEXT 10M MAXSIZE 2000M
   DEFAULT STORAGE (
      INITIAL 10M
      NEXT 10M
      PCTINCREASE 0)
/


Tablespace Management
After you create your tablespaces, you may find that they aren't quite what you needed. To rectify this, you can drop and re-create the tablespace or, in some cases, modify it. Modifying tends to be the easier solution when segments have already been created in the tablespace, because dropping such a tablespace generally requires you to find a way to save and reload those segments.

Changing the Characteristics of a Tablespace


You can see a tablespace's current characteristics by viewing the data dictionary view DBA_TABLESPACES. Many of these characteristics can be changed with the ALTER TABLESPACE command. The syntax for the command is as follows:

ALTER TABLESPACE tablespace_name option;

Table 5.1 summarizes the options available with the ALTER TABLESPACE command. (You can use only one option at a time with the command.) Because some of the options are a little more complex than you might infer from Table 5.1, the following sections explain why you might want to use them.

Table 5.1 ALTER TABLESPACE options

Option                Purpose
OFFLINE               Makes a tablespace unavailable for use and prevents access to its contents
ONLINE                Returns a tablespace from OFFLINE to ONLINE accessible status
BEGIN BACKUP          Readies the files in the tablespace for hot backup
END BACKUP            Returns the tablespace's files to normal status following a hot backup
LOGGING or NOLOGGING  Sets the default logging behavior of new objects created in the tablespace
RENAME DATAFILE       Identifies the new name of a data file to a tablespace when the file itself has been changed in the operating system
COALESCE              Coalesces contiguous areas of free space into a single free extent
MINIMUM EXTENT        Sets the minimum size of any extent, used or free, in the tablespace
READ ONLY             Prevents further writes into the tablespace
READ WRITE            Allows writes into the tablespace after it's been read-only
TEMPORARY             Converts a tablespace to one that holds only temporary segments
PERMANENT             Converts a temporary tablespace to a permanent one
DEFAULT STORAGE       Changes the default extent characteristics assigned to any new segments built in the tablespace
ADD DATAFILE          Creates one or more additional data files for the tablespace
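Before issuing any of these options, you might check a tablespace's current settings. The following query is a sketch; the column list assumes the Oracle8 version of the DBA_TABLESPACES view:

```sql
-- Hypothetical check of tablespace characteristics before an ALTER.
SELECT tablespace_name, status, contents, logging
  FROM dba_tablespaces
 ORDER BY tablespace_name;
```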

Removing Access to a Tablespace
You may need to prevent access to a tablespace for a number of reasons. For example, you may want to back it up without users being able to change its contents, or you may need to perform maintenance or recovery on one of its data files. You can take a tablespace offline to prevent further read and write access. The ALTER TABLESPACE...OFFLINE command that you use to accomplish this has three options: NORMAL, TEMPORARY, and IMMEDIATE.

When you take a tablespace offline with the NORMAL option, Oracle immediately prevents further retrieval from that tablespace. However, it completes a checkpoint on the tablespace's data files before shutting it down completely; any changed blocks belonging to the tablespace that are still in the database buffer cache are copied back to disk. This results in an internally consistent tablespace, so it can be brought back online at any time without further processing.

Bringing a tablespace back online
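As a sketch of a simple maintenance window (the tablespace name is hypothetical):

```sql
-- Hypothetical example: block all access to a tablespace with a clean
-- checkpoint, then make it available again after maintenance is done.
ALTER TABLESPACE user_data OFFLINE NORMAL;
-- ...perform maintenance on the tablespace's data files here...
ALTER TABLESPACE user_data ONLINE;
```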


The ALTER TABLESPACE...ONLINE command brings an offline tablespace back online, provided that it was successfully checkpointed when it went offline and that all its files are currently online. If one or both of these conditions isn't true, the data file(s) will need recovery before the tablespace can be brought back online. Tablespace and data file recovery are discussed in Chapter 14, "Performing Database Recovery."

The TEMPORARY and IMMEDIATE options of the OFFLINE command don't necessarily complete checkpoints. This can leave the tablespace inconsistent with the rest of the database, so it may need media recovery when it's brought back online. To guarantee that the redo information required for this recovery is available when needed, the database must be running in ARCHIVELOG mode. The difference between TEMPORARY and IMMEDIATE is that the former attempts to complete checkpoints on all the files, ignoring any not available for writes, whereas IMMEDIATE doesn't even attempt to process any checkpoints.

Hot Backups of a Tablespace
A hot tablespace backup is one made while access to the tablespace's data files continues. Even though making a copy of all the data files associated with a tablespace may take a number of minutes, Oracle allows users to read blocks from those files and modify those blocks, and allows DBWR to write the changes back to disk. This can result in apparent anomalies in the backup set. A table with blocks in two different data files could have some blocks in each file modified by a single transaction. The backup copy of one file could contain blocks as they were before the change, whereas the backup of the second file could contain changed images of other blocks. Oracle can resolve such anomalies by applying redo records to the backup files if they're used to replace damaged online files. To do this, the file needs to record the earliest time at which a block may have been changed but not copied into the backup file.
This information is automatically available in a file header block, but normally it changes over time. To prevent such a change from occurring, and so lock in the time at which the physical backup begins, Oracle needs to freeze the header block for the duration of the backup. As the DBA, you issue ALTER TABLESPACE...BEGIN BACKUP before starting the physical backup of files in the tablespace. This freezes the header blocks of the data files belonging to the tablespace, as discussed earlier. You need to unfreeze these blocks when the backup is completed, which you do with the ALTER TABLESPACE...END BACKUP command. Although you can place a number of tablespaces in backup mode simultaneously, you should understand one other characteristic of a tablespace's backup mode: while in backup mode, Oracle has to create additional redo information to guarantee data consistency within blocks.

Block consistency during hot backups
During a hot backup of a data file, it's possible for the operating system to copy different parts of an Oracle block to the backup medium in two separate read/write operations. If DBWR happened to write a new image of a block to the data file between the two operations, the backup would contain a fuzzy block image: part of the block would represent integral data at one point in time, while the remainder would contain data from a different time. To ensure that a complete valid block image can be restored when recovering from this backup, Oracle places a complete block image into the redo log before any changes can be made to a block from a tablespace in backup mode. When recovering from the log, this valid block image is first copied over the possibly inconsistent block from the backed-up data file, and then the changes recorded in the redo are applied as usual.
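The overall sequence can be sketched as follows; the tablespace name is hypothetical, and the operating-system copy appears only as a comment because Oracle doesn't perform OS file commands for you:

```sql
-- Hypothetical hot backup of one tablespace; the database must be
-- running in ARCHIVELOG mode for BEGIN BACKUP to succeed.
ALTER TABLESPACE user_data BEGIN BACKUP;
-- ...copy the tablespace's data files with an OS utility here...
ALTER TABLESPACE user_data END BACKUP;
```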
The redo logs needed to bring the data back to a consistent state must be available in order for the backed-up files to be useful in a recovery effort. To ensure this, you have to be running your database in ARCHIVELOG mode, which guarantees that all redo written to the online redo logs is copied elsewhere before the entries are overwritten by later transactions. You'll receive an error message if you try to place a tablespace into backup mode and you aren't archiving your redo.

Controlling Logging Behavior
A number of SQL commands can execute in Oracle without generating redo logs. These commands work with an existing set of data and therefore can be re-executed against the same data source if they fail. For this reason, you wouldn't have to rely on the existence of redo entries if there were an instance failure partway through the execution of the command. In addition, the SQL*Loader utility can run without logging because, again, the data source will still be available if the instance should fail before the load completes. These commands can be executed without the need for redo log generation:

- INSERT, where data is being selected from another source
- CREATE TABLE...AS SELECT
- CREATE INDEX
- ALTER INDEX...REBUILD
- ALTER INDEX...REBUILD PARTITION
- ALTER INDEX...SPLIT PARTITION
- ALTER TABLE...SPLIT PARTITION
- ALTER TABLE...MOVE PARTITION

You can set the whole tablespace to a non-logging mode if it's going to contain many segments that you'll typically want to manipulate with these commands (and not generate redo entries). You must do this before you build the segments, however, because a segment acquires the tablespace's logging mode only at the time it's created. You set the default logging mode for the tablespace with the ALTER TABLESPACE command, using the LOGGING or NOLOGGING option. When set, each new segment you create can accept this default behavior, or you can override it with the appropriate logging clause in the CREATE command.

Moving Data Files
There are generally two reasons to move a data file:
- You're restoring a backed-up copy of the file following a disk failure and need to place the file on a different device or in a different directory structure than the original.
- You've determined from monitoring database read/write performance that you have contention on certain disks. To solve this, you may need to move one or more data files to different disks.

After you move a data file, you need to let the database know that the file has moved. You do this with a RENAME option of either the ALTER DATABASE or the ALTER TABLESPACE command. Generally, you use the former when the database is mounted but not open and you're in the process of recovering from media failure; you use the latter when you've completed a planned file move. In the latter case, you need to take the tablespace offline before physically moving the file and issuing the ALTER TABLESPACE...RENAME DATAFILE 'old_filename' TO 'new_filename' command. You can rename more than one data file in a single statement as long as they all belong to the same tablespace. Use a comma-separated list of filenames on each side of the TO keyword, ensuring that there's a one-to-one match between the names.
For example, the following command will move three files from the /d1 device to three different devices:

ALTER TABLESPACE prod_tables RENAME DATAFILE
   '/d1/prod02.dbf',
   '/d1/prod03.dbf',
   '/d1/prod04.dbf'
TO
   '/d2/prod02.dbf',
   '/d3/prod03.dbf',
   '/d4/prod04.dbf'

Oracle won't perform operating system file commands
It's important to remember that renaming a file is a two-step process. Oracle doesn't physically move or rename the file at the operating system level; you are responsible for making this change yourself before issuing the ALTER TABLESPACE...RENAME command.

Coalescing Free Space Manually
When there are multiple adjacent extents of free space in a tablespace, it can take longer to create a new extent that spans these free extents. If you monitor DBA_FREE_SPACE and notice that such free extents exist, you can manually combine them into one large free extent on demand by issuing the ALTER TABLESPACE...COALESCE command.

Automatic free-space coalescing
If you don't coalesce contiguous free-space extents yourself, the background process SMON will do it for you automatically. The ALTER TABLESPACE...COALESCE option is provided because SMON may not work soon enough to be useful.
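A sketch of this monitoring and maintenance step (the tablespace name is hypothetical):

```sql
-- Hypothetical: count the free extents in each tablespace, then
-- coalesce adjacent ones in a fragmented tablespace on demand.
SELECT tablespace_name, COUNT(*) AS free_extents, SUM(bytes) AS free_bytes
  FROM dba_free_space
 GROUP BY tablespace_name;

ALTER TABLESPACE user_data COALESCE;
```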
Avoiding Free Space Fragmentation
One way to avoid having free-space extents of various sizes is to prevent anyone from creating segments in the tablespace without your supervision. You can then ensure that they use extents of the same size for every segment. If this isn't an option, you can help minimize the problem by setting a "model" size for extents in the tablespace. This model size represents the smallest extent allowed and also controls the size of larger extents by ensuring that they are all integer multiples of the model size. If you decide to set such a model value for your tablespace, use the ALTER TABLESPACE...MINIMUM EXTENT command, providing an integer for the size, in bytes, of the smallest allowable extent. When MINIMUM EXTENT is set, every new extent added to the tablespace will be exactly the requested size, rounded up to the next Oracle block, or an integer multiple of that number of blocks. This sizing overrides the tablespace's default storage clause, if necessary, as well as the storage options of the segment itself. Even manual extent allocations using commands such as ALTER TABLE...ALLOCATE EXTENT (SIZE...) are controlled by the MINIMUM EXTENT value.

Managing Query-Only Tables
To avoid having to make backups of data files that contain non-changing data, you can define a tablespace and, consequently, its data files as read-only. Similar to putting a tablespace into backup mode, this freezes the related data files' header blocks. However, because there can be no changes to the files, Oracle knows that they are current copies, no matter how long ago they were made read-only. Consequently, you can take a backup of such files and restore them, following media failure, at any time in the future without their needing any recovery information from the redo log files.
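As a sketch, assuming a hypothetical HIST_DATA tablespace that holds static historical data:

```sql
-- Hypothetical: freeze a tablespace of static data so its files need
-- backing up only once, and reopen it if the data must change later.
ALTER TABLESPACE hist_data READ ONLY;
-- ...back up the tablespace's data files now...
ALTER TABLESPACE hist_data READ WRITE;
```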
Backup guidelines for read-only tablespaces
You should back up the files in a tablespace as soon as possible every time you make it read-only; an earlier backup will still need to have redo applied to ensure that all changes made before the change in status are applied. Following a change back to read/write, you can still restore from the backup taken while the tablespace was read-only, provided that you have the redo generated after its change back to read/write status.

If you later need to make changes to one or more tables in a read-only tablespace, you have to make the tablespace accessible for writes again. You use the commands ALTER TABLESPACE...READ ONLY and ALTER TABLESPACE...READ WRITE to make these changes.

Storage for Temporary Segments
Temporary segments, used to complete sorts too large for the memory allocated to them, are ephemeral objects. They're created when needed and dropped when their work is done. In some databases, particularly query-intensive ones, the overhead of creating and dropping temporary segments can cause a significant performance problem. You can alter this default behavior by defining the tablespace where the temporary segments are stored to contain only this type of segment. Then, rather than drop a temporary segment when its work is finished, Oracle preserves it for use by another sort in the future. If you didn't create the tablespace with this characteristic, you can issue ALTER TABLESPACE...TEMPORARY to convert it to contain non-disappearing temporary segments. If the tablespace happens to contain another type of segment, such as a table or index, you can't make this change. Conversely, if you need to add non-temporary segments to a TEMPORARY tablespace, or to change the storage characteristics of its temporary segments, you can convert it back; in this case you use the keyword PERMANENT in the ALTER TABLESPACE command.
Any existing temporary segments will then be dropped when their work completes, as in the default behavior, and you'll be able to add any other type of required segment to the tablespace. You'll have to drop any such segments ahead of time if you want to reconvert the tablespace to TEMPORARY.

Closing your database releases all temporary segments
Temporary segment space isn't held over database shutdowns and restarts. Even the temporary segments stored in TEMPORARY-type tablespaces will have disappeared when you reopen a closed database.

Modifying Default Storage Values
The ALTER TABLESPACE...DEFAULT STORAGE command allows you to change the default values assigned to the storage characteristics of segments created in the tablespace without their own, overriding, STORAGE
clauses. You need to take care when issuing this command for a couple of reasons:
- This command affects only segments created after the change is made; it doesn't affect any existing segments. Specifically, if you have a table created with a default MAXEXTENTS value of 20, that table can contain only 20 extents, even if you change the tablespace default MAXEXTENTS to 40, 50, or even UNLIMITED. To change the storage characteristics of any existing segments, you have to alter each one individually; whenever a segment is created, its storage parameters are stored in the data dictionary as part of that object's definition. Changing a tablespace's DEFAULT STORAGE changes only the tablespace definition, not the definitions of the objects within it.
- If you've defined your tablespaces to contain extents of the same size, changing any of the INITIAL, NEXT, or PCTINCREASE default values causes any new object to build extents of sizes different from those of existing segments. Therefore, unless you're prepared to deal with the possible fragmentation caused by different-sized extents, you shouldn't modify these particular values in anything other than an empty tablespace.

Adding and Resizing Data Files
You'll occasionally have to add space to an existing tablespace. This may be a planned or an unplanned occurrence:
- Planned expansion is usually the result of anticipated database growth over time in a system where the full complement of disk drives to support the growth wasn't available at database-creation time. It can also be the result of adding functionality to an application that requires more rows or columns in a table.
- Unplanned expansion occurs when a segment, such as a table or an index, grows much larger than was anticipated in the database-design phase. This may be due to poor analysis or to a sudden change in the environment, such as an unanticipated doubling of orders for a specific product.
For planned expansion, particularly expansions involving the addition of new disks, adding more data files is the best method for adding space to a tablespace. This allows you to add exactly the amount of space you need and to place it on different disks from the existing files, thus avoiding possible disk contention. File-system files and raw partitions can be added by using the ALTER TABLESPACE...ADD DATAFILE command. As with the CREATE TABLESPACE command, you can add one or many files with the same statement. The file name and size specifications are the same as in the CREATE TABLESPACE command discussed earlier in this chapter.

You can also use additional data files, as just discussed, for an unplanned expansion. In such cases, you may not be able to place the files on new, unused disk drives; you may have to find whatever space is available in the disk farm for the time being. Also, if you need to use raw partitions, you'll have to be able to create them yourself or have the system administrator build them for you, unless you already have spares available.

An alternative for an unplanned expansion is to let the data files grow themselves. This has to be done when you first add them to the tablespace, using the AUTOEXTEND clause with the CREATE or ALTER TABLESPACE commands' file specification. If you didn't set this option when you added the files to the tablespace, you can still increase a file's size by extending it manually with the ALTER DATABASE DATAFILE...RESIZE command. (Notice that this is ALTER DATABASE, not ALTER TABLESPACE.) The RESIZE clause takes a single argument, indicating the number of bytes that you want the file to contain following successful execution of the command. This can be either a simple integer or an integer followed by K or M for kilobytes or megabytes, respectively.

Shrinking oversized data files
You can use ALTER DATABASE DATAFILE...RESIZE to shrink, as well as to increase, the size of a data file.
You can't reduce a file, however, unless there's enough empty space at the end of the file to remove the number of bytes needed to reach your desired size. The RESIZE option can't remove empty space from the middle of a file, and it won't remove blocks currently assigned to a database object. The ALTER DATABASE DATAFILE...RESIZE command manipulates only the space requested; it won't cause the file to expand or shrink automatically in the future.
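The two approaches can be sketched as follows; the tablespace, file names, and sizes are hypothetical:

```sql
-- Hypothetical: add a new 200MB data file on a different disk, then
-- manually grow an existing file to 250MB (note ALTER DATABASE, not
-- ALTER TABLESPACE, for the resize).
ALTER TABLESPACE user_data
   ADD DATAFILE '/u03/oradata/prod/usrd02.dbf' SIZE 200M;

ALTER DATABASE DATAFILE '/u02/oradata/prod/usrd01.dbf' RESIZE 250M;
```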

Dropping Tablespaces
Although not a common requirement, you may need to drop a tablespace. There are a few reasons you might need to do this:
- You no longer need the segments it contains for any further processing.

- There's enough corruption or damage to the contents of the tablespace that you want to rebuild it from scratch.
- You've moved its contents to another tablespace.

In order to drop a tablespace, it must not contain any rollback segments being used by an active transaction. If it contains any segments at all, you must use the INCLUDING CONTENTS option to force Oracle to drop these segments along with the tablespace. This is the DROP TABLESPACE command's full syntax:

DROP TABLESPACE tablespace_name
   [INCLUDING CONTENTS]
   [CASCADE CONSTRAINTS]

You'll need the CASCADE CONSTRAINTS option if the tablespace contains tables being dropped with the INCLUDING CONTENTS option and these tables are the parents, via referential integrity constraints, of tables in another tablespace.

Dropping online tablespaces isn't recommended
Although you can drop a tablespace while it's still online, I advise you to take it offline first. This will avoid interference with ongoing transactions that are using the contents of the tablespace and save you from dropping a segment that's really still being used.

SEE ALSO
For a complete discussion of database ARCHIVELOG modes,
To learn more about temporary segments,
To learn about referential integrity constraints and the concepts of parent/child tables,
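A sketch of the safer sequence (the tablespace name is hypothetical):

```sql
-- Hypothetical: retire an unneeded tablespace. Taking it offline first
-- avoids interfering with transactions that might still be using it.
ALTER TABLESPACE old_data OFFLINE NORMAL;
DROP TABLESPACE old_data INCLUDING CONTENTS CASCADE CONSTRAINTS;
```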

Extent Allocation
After you build your tablespaces, you or your users will use them to store various types of segments. Some of these will almost certainly be added by you, and some will be created automatically by the Oracle kernel. The others may be created by you or by the users, but their maintenance and space management may still be under your control in either case.

Part of the work involved in managing segment space allocation should be completed during the physical design of your database, because it's related to the number and arrangement of your tablespaces. This topic is discussed earlier, in the section "Space Management Fundamentals"; the discussion that follows assumes that you've already decided what type of segment is being placed where. It concentrates on how new extents are added to these segments after they're created, and on how unneeded space can be retrieved from segments to which it has already been allocated.

Comparing Dynamic and Manual Extent Allocation


Every type of Oracle segment (except the bootstrap segment, which is fixed in size at database-creation time) can grow automatically by default. The following sections examine how each segment type can have its growth controlled and how best to manage any required growth.

Temporary Segments
We begin this discussion with temporary segments because in many ways these are the easiest to manage: you have so little control over them. Temporary segments are created as needed by Oracle while it's processing SQL statements on behalf of a user's process. Temporary space is generally required by a process that's performing a sort operation, although some types of joins and related activities also use temporary segments. This temporary disk space is used only when the SQL statement has insufficient memory in which to complete its processing.

All extent allocation to temporary segments is dynamic. In other words, it occurs automatically, without any specific commands from users. This is true whether a new temporary segment is being created or new extents are being added because the original size was insufficient to complete the task. Because of this completely automatic behavior, users can in no way give explicit instructions about the sizing or the number of extents in a temporary segment. Oracle always uses the storage information defined in the DEFAULT STORAGE clause of the tablespace for its temporary segments. In Chapter 6, "Managing Redo Logs, Rollback Segments, and Temporary Segments," you can find guidelines on how to determine good storage values for temporary segments.


By default, as soon as the operation that required the disk space is finished, the temporary segment is dropped and the blocks used by its extents are returned to the tablespace as free blocks. It's this behavior that gave temporary segments their name; they acquire space for only a short time and then return it.

Another option you should consider for your temporary tablespaces is creating them as, or converting them to, the TEMPORARY type. This prevents Oracle from dropping temporary segments in the tablespace following the completion of the related SQL statements. Instead, the extents used in the segment are tracked in the data dictionary and made available to any server process that needs temporary space. New extents are added to the segments only if all the current extents are in use by one or more users.

Closing your database releases all temporary segments
Temporary segment space isn't held over database shutdowns and restarts. Even the temporary segments stored in TEMPORARY-type tablespaces will have disappeared when you reopen a closed database.

By not forcing users to create and re-create temporary segments each time they're needed, their work can be completed much faster. In fact, you can save your users a lot of time when you first create a TEMPORARY-type tablespace by prebuilding all the extents the tablespace can hold. You can do this by performing a massive sort (if you have a table or set of tables large enough to join), or by running large sorts in a number of concurrent sessions. Make sure that the userid you use for these sorts is assigned to the temporary tablespace you're planning to populate. Another benefit of using TEMPORARY-type tablespaces for your temporary segments is that Oracle enforces the use of same-size extents: all extents in such tablespaces are built based on the value of the NEXT parameter in the DEFAULT STORAGE clause.

Rollback Segments
As the DBA, you should create and manage rollback segments.
You initially create a rollback segment with two or more extents, with extent sizes taken from the tablespace's default values or from the STORAGE clause of the CREATE ROLLBACK SEGMENT command. The behavior of the extents allocated to rollback segments is of interest here.

SEE ALSO
You can find detailed information about creating rollback segments on

Rollback segments store the information that would be needed if a transaction were to roll back. Every part of a single transaction must be stored in the same rollback segment, and many transactions can share the same segment. In most cases, transactions generate about the same amount of rollback information, so when a rollback segment reaches a certain size, its space is sufficient to support all the needed concurrent transactions. As these transactions complete, the space they were using is recycled and made available to new transactions. However, if the database gets very busy or suddenly needs to support one or more very long-running transactions, a rollback segment may need to grow by adding one or more extents. As with temporary segments, this allocation is dynamic; users have no control over it.

Rollback segments have one space-management characteristic not possessed by any other type of segment: they can shrink in size by dropping unnecessary extents. Suppose a rollback segment grew by adding extents in response to an unusual combination of concurrent long-running transactions. If it could handle its workload without additional space before this, it should be able to do so again. If it's sharing a tablespace with other rollback segments, this space might be better used by one of the others, perhaps also for a sudden increase in work. You can cause a rollback segment to return to a preferred size whenever it exceeds it by setting an OPTIMAL parameter value.
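As an illustration of setting OPTIMAL, a rollback segment could be created and later adjusted as follows. This is a sketch; the segment and tablespace names and the sizes are assumptions for illustration, not values from the book:

```sql
-- Create a rollback segment with four 100K extents; OPTIMAL may not be
-- smaller than the initial allocation (4 x 100K = 400K here).
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs
  STORAGE (INITIAL 100K NEXT 100K MINEXTENTS 4 OPTIMAL 400K);

-- Raise OPTIMAL later if the segment routinely needs more space:
ALTER ROLLBACK SEGMENT rbs01 STORAGE (OPTIMAL 600K);
```

Note that only the OPTIMAL value can be changed this way after creation; INITIAL and MINEXTENTS are fixed for the life of the segment.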
OPTIMAL is a special parameter
Whereas all the other storage parameters for a rollback segment can be inherited from the tablespace definition, OPTIMAL must be set with the STORAGE clause of the CREATE ROLLBACK SEGMENT or the ALTER ROLLBACK SEGMENT command.

Data and Index Segments

Segments designated to store table or index data can be created by you or by the userids responsible for the applications that will use them. These segments can inherit all their tablespace's storage characteristics, just some of them, or none of them. Once created, their storage characteristics can for the most part be changed; only the INITIAL and MINEXTENTS values are fixed for the life of the segment. If a data or index segment runs out of space in its current extents, one of two things can occur: A new extent

http://www.informit.com/content/0789716534/element_005.shtml (17 of 20) [26.05.2000 16:47:04]


will be added by Oracle dynamically, or the SQL statement that required the extra space will fail. There are a number of reasons dynamic allocation could fail:

- There may be insufficient space in the tablespace, and none of the data files can autoextend.
- There may be space in the tablespace, but no single extent of free space is large enough to hold the required extent.
- The segment may already contain the MAXEXTENTS number of extents.

Adding space to allow a new extent to be created automatically when the failed SQL statement is re-executed was discussed earlier in "Adding and Resizing Data Files"; this addresses the first two causes of dynamic space allocation failure. Another option for handling the second problem is to change the value of the segment's NEXT storage option, causing it to create an extent that fits into a remaining free extent. A third option is to allocate the extent manually. You use the ALTER TABLE command's ALLOCATE EXTENT clause to do this. The complete syntax is as follows:

- SIZE sets the extent size, regardless of the table's storage values.
- DATAFILE identifies into which data file the extent will be placed.
- INSTANCE identifies which freelist group will manage the blocks in the extent (used for databases running the Oracle Parallel Server option).

This command has one additional benefit over changing the NEXT value. If you want, you can execute it a number of times, each time choosing a different size for the extent and a different data file into which it goes. This allows you to prebuild extents that precisely fit the available free space until you have sufficient space allocated, letting work on the segment continue while a more permanent solution, such as additional disk space, is found.

You can also take advantage of manual extent allocation with the ALLOCATE EXTENT option for reasons other than overcoming space limitations. For example, you may want to build a segment in a tablespace with many data files so that you guarantee that blocks from each data file will be used by the segment. To do this, you can create the table or index with a single extent and then use the DBA_EXTENTS data dictionary view to find out which data file contains this extent. Then, by successive use of the ALTER TABLE...ALLOCATE EXTENT command, you can place an additional extent into each data file belonging to the tablespace.

A final note on manual extent allocation
If you don't provide a size when manually allocating an extent, the extent will be sized as though it were created dynamically. If you do use the SIZE clause, however, it won't change the size of extents that are allocated dynamically later. If a table were going to build its next dynamic extent with 1,000 blocks and you manually add an extent of just 50 blocks, the next dynamically allocated extent would still acquire 1,000 blocks.
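Based on the clause descriptions above, a manual allocation session might look like the following sketch; the table name, owner, extent size, and file path are illustrative assumptions:

```sql
-- Place a 50K extent into a specific data file of the tablespace.
ALTER TABLE unfilled_orders
  ALLOCATE EXTENT (SIZE 50K DATAFILE '/u02/oradata/prod/data02.dbf');

-- Check which data files now hold extents for the segment.
SELECT extent_id, file_id, blocks
  FROM dba_extents
 WHERE owner = 'APP'
   AND segment_name = 'UNFILLED_ORDERS';
```

Repeating the ALTER TABLE statement with a different DATAFILE value each time is how you spread a segment's extents across all the tablespace's files.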

Releasing Unused Space


Occasionally you'll build a segment far larger than you need it to be, perhaps because the initial estimates made during the analysis and design phase were wrong, or because the nature of the application changed. If you need to regain the unused space, you have a variety of options.

First, in the case of rollback segments, you can use the ALTER ROLLBACK SEGMENT command to change the value of OPTIMAL. As long as you don't try to shrink it to a size less than its original size (that is, size of extent 1 + size of extent 2 + ... + size of extent MINEXTENTS), the rollback segment will, as it's used in future transactions, return to this size. As long as it doesn't consistently run out of space, it will attempt to maintain this size even if it temporarily grows beyond it. For a one-time fix, you also have the option of issuing the following command:

SHRINK can cause free extents of unequal size

```sql
ALTER ROLLBACK SEGMENT...SHRINK [TO integer [K|M]]
```

This command can remove partial extents. Therefore, even if you've carefully built your tablespaces and segments to have equal-sized extents, the end result of this command can be an extent smaller than planned and a piece of free space larger than the expected extent size.

For table, cluster, and index segments, you can remove unused space with the appropriate ALTER command's DEALLOCATE UNUSED clause. This command removes any unused extents and blocks as long as the original extents, set with the MINEXTENTS value in the CREATE command, aren't involved. You can even retain some empty space with the optional KEEP clause. This saves some allocated space to allow for future growth without further extent allocation.

A second option for removing excessive space from a table, one that preserves extent sizes, is to move the data into a temporary table, drop all the extents (other than the original ones) from the table, and then move the rows back into it. The following code shows a session that performs exactly these actions on the UNFILLED_ORDERS table. The key commands are the CREATE TABLE...AS SELECT and TRUNCATE commands.

```sql
CREATE TABLE temp AS
SELECT * FROM unfilled_orders
/
TRUNCATE TABLE unfilled_orders
/
INSERT INTO unfilled_orders
SELECT * FROM temp
/
DROP TABLE temp
/
```

To drop unused space from an index, you can simply use the ALTER INDEX command's REBUILD option. The only restriction to be concerned with when using this command is that the original and the replacement copies of the index will temporarily exist at the same time. This means you'll need space for the new version of the index in the target tablespace, which may not be the same as the current one.
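The DEALLOCATE UNUSED and REBUILD options just described might be used as follows; the table name, index name, and KEEP amount are assumptions for illustration:

```sql
-- Return unused blocks above the high-water mark to the tablespace,
-- keeping 1M of allocated space for future growth.
ALTER TABLE unfilled_orders DEALLOCATE UNUSED KEEP 1M;

-- Rebuild an index to release its unused space; the old and new copies
-- exist simultaneously while the rebuild runs.
ALTER INDEX unfilled_orders_pk REBUILD;
```

Omitting the KEEP clause releases all space above the high-water mark, subject to the MINEXTENTS restriction noted above.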

Defragmenting Free Space


If you have a database that, for whatever reason, doesn't follow the guidelines discussed in this chapter about maintaining equal-sized extents in a tablespace, you may find yourself with a badly fragmented tablespace. Such a tablespace contains a lot of free space, but each piece, or extent, of free space is only a few blocks in size, too small to be usefully added to any of the segments in the tablespace.

For tablespaces of type TEMPORARY, this occurs only if the default storage settings were changed before the tablespace was completely filled. To fix it, you can alter the tablespace to be a PERMANENT tablespace again. This causes all the temporary segments to be freed over time and new ones built in their place. When all the odd-sized extents have been removed, you can convert the tablespace back to type TEMPORARY and allow the new segments to grow back to the necessary sizes using fixed-size extents.

For rollback segment tablespaces, your best option is to provide additional rollback segments temporarily, giving you a chance to drop and recreate the current rollback segments with appropriate storage values in your CREATE command. If you already have rollback segments available in another tablespace, you may be able to make the changes without adding temporary ones. If you plan to do this, try to drop and recreate the problem rollback segments during a period of low use, so that users aren't held up by an insufficient number of available rollback segments.

The most difficult fragmentation problems to fix are those associated with tables and indexes. If you can afford to drop all the indexes in the tablespace temporarily and then rebuild them, this is the easiest way to solve the problem. However, if the tablespace contains tables or other types of segments besides the indexes, you need to deal with the larger problem. Similarly, if the indexes can't all be dropped, you don't have a simple method to solve the issue.
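The TEMPORARY-to-PERMANENT cycle described above can be sketched as follows; the tablespace name is an assumption:

```sql
-- Convert the fragmented temporary tablespace to PERMANENT so its
-- odd-sized temporary segments are eventually freed.
ALTER TABLESPACE temp PERMANENT;

-- ...after the old temporary segments have been released...

-- Convert it back so new, equal-sized temporary segments are retained.
ALTER TABLESPACE temp TEMPORARY;
```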
Need to defragment a tablespace?


If you have to defragment a tablespace, I strongly recommend that you reconsider your tablespace usage before reloading anything. You may want to change all the segment storage clauses so that they all have equal-sized extents, or you may want to add more tablespaces to meet the design suggestions offered earlier in this chapter. As your database grows in size, the inconvenience to you and to your users will increase should you need to perform future defragmentation processing.

Fragmented tablespaces containing tables, or tables and other types of objects, are very difficult to handle. Some third-party tools are available to help. Without them, you'll need to use a tool to store the contents of the tablespace in some type of temporary storage, drop the tablespace contents, and then restore the original contents using new segments with appropriate sizes. Oracle offers the Export and Import utilities to help you do this. You can also build your own tools to unload data, table definitions, and the like, and then use a combination of SQL, SQL script files, and SQL*Loader to reload the tablespace.
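As a rough sketch of the Export/Import approach, assuming the tablespace holds a single schema (the userids, file name, and schema name here are hypothetical):

```shell
# Unload the schema; COMPRESS=n preserves the stored extent sizes
# instead of consolidating them into one large initial extent.
exp userid=system/manager file=app_ts.dmp owner=appuser compress=n

# ...drop and recreate the tablespace with appropriately sized extents...

# Reload the schema's objects into the rebuilt tablespace.
imp userid=system/manager file=app_ts.dmp fromuser=appuser touser=appuser
```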

Copyright 2000 InformIT. All rights reserved.


Managing Redo Logs, Rollback Segments, and Temporary Segments


- Database Management Structures
- Managing Redo Log Files
    - Sizing Your Redo Logs
    - Determining the Number of Redo Log Groups
    - Determining the Number of Redo Log Members
    - Adding Redo to Your Database
    - Dropping Redo Logs and Handling Problem Logs
- Managing Rollback Segments
    - Determining the Number of Rollback Segments
    - Sizing Your Rollback Segments
    - Adding Rollback Segments
    - Creating a Rollback Segment
    - PUBLIC Versus PRIVATE Rollback Segments
    - Altering Rollback Segments
    - Dropping and Shrinking Rollback Segments
- Working with Temporary Segments
    - Sizing Your Temporary Tablespaces
    - Setting Storage Options for Your Temporary Tablespaces
    - Managing Your Temporary Segments
Database Management Structures


To most database users, tables are the important elements, with indexes a second component they might consider useful. In an Oracle environment, however, some other structures are essential to the efficiency and integrity of the database. You need to know how to build and manage these components for your database to run smoothly and for your users to work with their tables in an orderly manner.

Oracle8 can be forgiving, but don't count on it


Although you can run your database without paying attention to your redo logs and temporary segments, you'll almost certainly pay a performance penalty by ignoring them. In most cases, these penalties will be severe and may cause your database to fail to meet your users' requirements. Rollback segments are a little more intrusive, at least if you follow Chapter 5's recommendations to use multiple tablespaces in your database, because you'll find that the default database structures won't support DBA- or user-created segments in new tablespaces.

The three structures you look at in this chapter are redo log files, rollback segments, and temporary segments. If you've already read Chapter 5, "Managing Your Database Space," you'll be aware of some characteristics of the last two. You've been exposed to the concept of redo log files if you've read Chapter 2, "Creating a Database." Although the purpose of these structures is touched on in this chapter, the emphasis in the following sections is on helping you design, build, and manage them to the benefit of your database.

Managing Redo Log Files


One of the slowest operations performed in a relational database is the process of transferring data between disk storage (where data is maintained for long-term safety) and memory (where it must reside to be accessed and modified by database users). Although Oracle provides a number of parameters to tune memory and help avoid unnecessary disk reads, it also has its own mechanism, known as deferred writes, to reduce the overhead of unnecessary disk writes. Basically, this means that Oracle blocks that have been modified by user processes while sitting in memory are written back to disk when they can be written most efficiently. This approach takes into account how recently the last change was made, how many blocks can be written with a single write operation, and how much of the space occupied by these blocks is needed for further disk reads.

One key issue to understand about deferred writes is that blocks aren't moved back to disk just because the changes on them have been committed by a user; blocks containing uncommitted changes are just as likely to be written to disk as blocks with committed changes. In other words, the act of issuing a COMMIT doesn't override the deferred write considerations. This, in turn, leads to a situation where neither the contents of memory nor the contents of the data files represent a coherent picture of the database transactions. With such a design, it's essential that Oracle provide a mechanism to restore the database to a consistent state should the contents of memory be lost. Of course, disk loss also needs to be protected against, but you can use a number of techniques, discussed in the chapters in Part V, "Backing Up Your Oracle Database," to help ensure that such a loss can be recovered. There's no method to "back up" the contents of memory in case it should suffer a failure.
The importance of a COMMIT
When users issue the COMMIT command and receive an acknowledgment that the command has been processed, the changes made by the transaction are considered "permanent." This doesn't mean that they can't be changed in the future, but that the assigned values must be available to all other users of the database until they're changed again. This requires that the DBMS guarantee that committed data won't be lost by any fault of the database itself (database vendors can't control the quality of the media on which their databases are stored).

Oracle has to employ a method, redo logs, to ensure that the contents of unwritten blocks containing committed data can be restored should unexpected memory loss occur. Understand that Oracle can lose its memory area for many reasons: not just the loss or corruption of memory due to a chip failure, but also an unexpected operating system failure, loss of a key Oracle process that prevents further processing (and, hence, further access to memory), or a sudden shutdown of the Oracle instance without normal shutdown activity (a SHUTDOWN ABORT). Anytime Oracle stops operating without being able to complete a requested shutdown command, it's said to have suffered "instance failure." In any such case, there's no opportunity to copy block images from memory down to disk, nor to reload blocks that contain uncommitted changes. Instance failure requires that, when the instance is restarted, an instance recovery be performed to reapply all changes associated with committed transactions and to remove all uncommitted changes.

Implied SHUTDOWN ABORT
The server manager command STARTUP FORCE is used to cycle an instance; that is, it stops the current instance and starts a new one. However, it performs an implicit SHUTDOWN ABORT command to stop the running instance. Consequently, this command will cause an apparent instance failure, requiring an instance recovery as part of the requested startup process. You shouldn't use this command to restart an instance quickly, but only when you would normally need to use a SHUTDOWN ABORT.


The redo log files contain the information needed to complete an instance recovery. They allow the recovery operation to re-execute every command that produced part, or all, of a committed database change, whether the affected database blocks were copied back to disk before the memory failure or not. Similarly, they contain enough information to roll back any block changes that were written to disk but not committed before the memory loss. Without delving into the details of what really goes into a redo log and how the recovery process works (information that's explained fully in the Oracle8 Server Administrator's Guide and the Oracle8 Server Concepts manuals provided as part of the Oracle documentation), the following sections explain what you need to consider when building your database and preparing it for your users and applications.

Sizing Your Redo Logs


If you make a mistake in the arithmetic in your check register, you can go back to the last time you balanced your account against a bank statement and redo the steps to get a valid current balance. But if you don't know when you last had a valid balance, you can't. Similarly, for Oracle to apply redo information following an instance failure, it must be able to find a starting point at which the status of all data and transactions was known before the failure. This starting point is known as a checkpoint.

Very simply, whenever a checkpoint occurs, Oracle forces every changed block now in memory to be written to disk to ensure that the redo records associated with those changes are stored in the redo log. When these steps are complete, a special checkpoint marker record is placed into the redo log. If the instance fails before the next checkpoint completes, the recovery operation knows that the log file and the data files were synchronized at the previous checkpoint. Any changes made to a data block since that checkpoint completed are recorded in the redo log entries written after its checkpoint marker. Therefore, recovery can begin at the most recent checkpoint marker record.

With this checkpoint mechanism in place, entries older than the most recent checkpoint aren't needed. To conserve disk space, Oracle reuses the redo logs: when all the logs are filled up, the LGWR process simply starts to write over the first redo log file again. Figure 6.1 shows how this circular use of the redo logs occurs. For this to be successful, Oracle has to ensure that a checkpoint has been taken before reusing a redo log, and this is achieved as part of the processing that occurs at a log switch. At that time, LGWR automatically begins writing new redo information into the next available log file and also initiates a checkpoint. Therefore, your redo log size has a direct bearing on the frequency of these default checkpoints.
Figure 6.1: LGWR reuses the redo log files in a circular fashion, overwriting old records as it goes.

You need to concern yourself with this because the amount of data written to a log file between checkpoints affects the length of the instance recovery process (the act of reapplying changes following an instance failure). If a log file holds a day's worth of transactions, it could easily take a day to reapply them after an instance failure. Assuming that it may take a few minutes to a few hours to research and fix whatever problem caused the instance failure, the total recovery time in such a case could be more than a day. Most businesses can't afford to have a key database unavailable for that long, unless it's a planned period of downtime during which other plans have been made to carry on with essential processing. Instance failure is, by definition, an unplanned event.

Of course, if your database is being used primarily for queries (which, by their nature, don't change the contents of the blocks being read), your redo log could be quite small and still not fill up in a 24-hour period. Only when new data is added, or old data is changed or removed, will redo entries be written to the log, and this is very infrequent in read-intensive databases.

Time to apply redo during instance recovery
If you're running Oracle8 on a multi-CPU platform, it's likely that your users are creating transactions and the related redo log records in parallel. When your database needs instance recovery, only one process, the background process SMON, will be reading the redo logs and reapplying the changes made by your users. This serial processing can take substantially longer, in elapsed time, than the transactions took when being processed by multiple users concurrently.

Another factor to consider is that you can set parameters to cause additional checkpoints to be performed automatically as a redo log fills up.
The LOG_CHECKPOINT_INTERVAL parameter sets the number of blocks that will be written to a redo log file before a checkpoint is forced to occur, whether or not the file is full. If a redo log file is 1,000 blocks in size, setting LOG_CHECKPOINT_INTERVAL to 250 will cause a checkpoint when it's one-quarter, one-half, and three-quarters full. Although such a setting will reduce your instance recovery time for this redo log, it may be inappropriate should you decide to reduce your redo log size, for


example, to expedite archiving (as discussed a little later in this section), and then forget to reset this parameter.

There are a number of considerations when setting checkpoint frequency, as the following two examples demonstrate:

- A customer proposed using an Oracle database to track the airborne spread of environmental pollutants during emergencies. During crises, it would be imperative to ensure that the most current information was readily available to emergency response teams. The information was to be loaded from a number of tracking stations in real time. In case of an instance failure, the customer wanted to restore operations as quickly as possible to continue to collect the most current data and to make it available to users in the field. To minimize downtime during instance recovery, it was recommended that checkpoints be taken every few seconds. Although not necessary for most users, a number of customers that need up-to-the-second real-time data have also instituted checkpoints many times a minute.
- On the other extreme, some commercial customers who experience heavy volumes of transactions for a few hours a day, such as bank counter service operations, have elected to avoid checkpoints during these hours by building large redo logs that can handle a full day's business without switching, and hence without causing any checkpoints. They force a checkpoint every day immediately before the work period to ensure that they have a complete log file available for that day's load.

Ideally, you don't want checkpoints to occur more than once every 15 to 20 minutes, and much less frequently if the database is mainly processing queries. The problem with starting checkpoints too frequently is that a number of very active blocks will still be in use between checkpoints, yet they will have to be written out at each checkpoint. These redundant writes waste disk I/O bandwidth.
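For the second scenario, the daily pre-work checkpoint can be forced with either of the following statements. This is a sketch; which you use depends on whether you also want to start a fresh log file:

```sql
-- Switch to a new log file; the switch itself triggers a checkpoint
-- and leaves a complete, empty log for the day's load.
ALTER SYSTEM SWITCH LOGFILE;

-- Or request a checkpoint without switching log files.
ALTER SYSTEM CHECKPOINT;
```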
You may want to experiment with log file sizes after you finish reading about other considerations, such as archive logging (discussed shortly), to come close to the ideal size.

Parallel server processes might speed up instance recovery
You can set the RECOVERY_PARALLELISM parameter in your initialization file to an integer value higher than 1 to allow SMON to enlist that number of parallel server processes for recovery. You must also start this number of processes by using the PARALLEL_MIN_SERVERS parameter. These processes will apply the redo in parallel during instance recovery. You may not see significant improvement in recovery time, however, because the parallel server processes must still apply the redo in sequential order, so they're likely to be contending for disk read access to the redo logs as well as for space in the database buffer cache.

Before leaving checkpoints, you should be aware of one other factor: to perform instance recovery, a checkpoint marker must be available to indicate the start point. If you have two log files and, for whatever reason, the checkpoint following a log switch doesn't complete until the second log fills up, the only checkpoint marker is in the first log file. If Oracle began to write redo records into this first log file again, there would be no guarantee that this remaining checkpoint marker wouldn't be overwritten, leaving no starting point for an instance recovery. Consequently, Oracle stops writing further redo entries until the checkpoint process completes and the new marker record can be written. If no redo log entries can be written, Oracle can't preserve the integrity of database changes, because block images in memory can't be guaranteed to be recoverable. So, rather than let unrecoverable changes occur, Oracle stops any further transaction processing. To the users, the database will appear completely frozen. Of course, after the checkpoint completes, work will continue as normal.
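A minimal initialization-file sketch for the parallel-recovery settings mentioned in the note above; the value 4 is purely illustrative:

```text
# init.ora
RECOVERY_PARALLELISM = 4   # parallel server processes SMON may enlist
PARALLEL_MIN_SERVERS = 4   # start this many parallel servers at startup
```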
You may have to size your redo logs large enough to avoid this problem, because a database that freezes out user activity isn't going to meet performance standards.

A second mechanism that may affect your redo log file size decision is whether you're going to archive your log files. Although this is another topic that belongs in the backup and recovery discussions in Chapter 12, "Understanding Oracle8 Backup Options," we'll take a quick look at it here. Normally, when a redo log is filled and another one is being written to, the contents of the first log are of no use following the completion of the checkpoint started at the log switch. When the other log file fills up, Oracle can safely begin writing over the contents of the first one. Similarly, because a new checkpoint will be under way, the data in the second log file will soon become unnecessary for instance recovery, so Oracle can switch back to it when the current log fills.

Now consider data file backups. They can't be made continuously, so the restoration of a backed-up data file will almost certainly place old block images back into the database; transactions completed since the backup was made won't be represented. However, if you could keep every redo entry made since the data file backup was made, restoring the blocks in that data file would require nothing different than restoring them following an instance failure.


Oracle offers the capability to "archive" your online redo log files so that they can be preserved for this very purpose. As each redo log fills up, Oracle still switches to the next one and starts a checkpoint, but also marks the completed redo log file for archiving. Either you or a special background process, ARCH, will copy the redo log to a special location where it can be saved for as long as needed.

When you place your database into the mode that requires completed log files to be saved to an archive location, Oracle becomes very adamant that this work be performed. In fact, it won't let redo activity switch back into a log file until that file has been safely archived. So, if your files are very big and take too long to archive (particularly if they're being copied to a slow disk drive), or so small that they fill up faster than they can be copied, you can run into problems. If the logs can't be switched because the archiving isn't done, no more log records can be written. Only when the archive is complete can work continue. While Oracle is waiting for the archive to finish, your users experience the same situation as when checkpoint completion was delayed: to them, the database is stuck and they can't get any work done. You may therefore have to adjust your log file size to ensure that the archiving process completes before the next switch.

Users and log file switches when archiving
Be sure to understand the pros and cons of archiving before deciding whether to use it, and then be prepared to monitor your system for problems with postponed log switches until archived copies are made. Not only does processing for current users come to a halt during such times, but new users attempting to connect to the database will be prevented from doing so. This can make the problem very visible to your user community. Even if your users don't let you know, the alert log for your database will show you when you have log-switching problems due to tardy archiving.
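As a minimal sketch of how the mode change is made (this assumes a DBA-privileged connection and, for automatic archiving by ARCH, the LOG_ARCHIVE_START initialization parameter set to TRUE; verify the details against your SQL reference):

```sql
-- The database must be mounted but not open to change the archive mode.
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Afterward, watch for groups that stay unarchived: ARCHIVED = 'NO'
-- on an INACTIVE group suggests the archiver is falling behind.
SELECT GROUP#, STATUS, ARCHIVED
  FROM V$LOG;
```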

Determining the Number of Redo Log Groups


Besides changing the size of your redo log files to avoid problems with checkpoints and archives not completing in time, you can increase the amount of redo that's written before a log file is needed again by adding more log files. If it takes half an hour to fill each log file, and you have two such files, it will take an hour before you fill both logs and need to reuse the space in one of them (refer to Figure 6.1). If this isn't always enough time to complete your checkpoints and archives, you have a serious performance problem and should turn to Chapter 20 to find out about redo log tuning. However, until you can resolve the problem, you could increase the length of time before a log file is reused by adding one more file.

Generally, databases with smaller log files and high DML activity tend to have peak periods during which their log files fill faster than checkpoints or archives complete. By adding more log files, you increase the time taken to fill the entire set of logs. This allows time for the checkpoint and archive work to catch up from the overloads during the peak periods.

A suggested option for sizing online redo logs
One approach to redo I've seen work successfully is to keep as much online redo log as can be filled between database backups. This way, should you have to restore a data file from a backup, the redo needed to recover it up to the point of failure will be in the online redo log. The recovery operations can access the online redo logs more directly than they can archived logs, so the recovery time will be reduced.

You need to find a good balance between the size of your log files, which affects default checkpoint intervals (and hence instance recovery times), and the number of your log files, which together with the size determines how long each checkpoint and archive has to complete. In general, you're better off having too much online (as opposed to archived, or offline) redo log rather than too little. Too little will likely cause at least an occasional pause while a checkpoint or an archive completes; too much will simply waste disk space.

Oracle requires you to have a minimum of two redo log files. You can have up to 255, unless your operating system sets a lower maximum. When you create a database, you can reduce the maximum number of redo log files you're allowed to create by setting the optional MAXLOGFILES parameter of the CREATE DATABASE command. You can also set an initialization parameter, LOG_FILES, to limit the number of log files that an instance can access.
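To see how many groups you currently have and how large they are, you can query the V$LOG dynamic performance view. A sketch (the column names come from the standard view definition):

```sql
-- One row per redo log group: member count, size, and current status.
SELECT GROUP#, MEMBERS, BYTES/1024 AS SIZE_KB, STATUS
  FROM V$LOG
 ORDER BY GROUP#;
```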

Determining the Number of Redo Log Members


Each log file is given a log group number, either by you as you add them or automatically by Oracle. We refer to log files with a group number because a group can contain more than one file. Each member of a group is maintained by Oracle to ensure that it contains the same redo entries. This is done to avoid making a redo log a single point of failure.

When your log groups contain only one member, you risk having the database become unusable if you lose a redo file. Recall from the earlier section, "Sizing Your Redo Logs," that at least one checkpoint completion marker must be available somewhere in your redo logs. If only one such marker happened to be in a set of logs and the file containing the marker was on a disk that crashed, you would no longer have a way of performing instance recovery. This jeopardizes your database, so Oracle, on detecting a missing log file, will stop processing any more transactions and perform a shutdown.

If each log file is paired with a copy of itself and that copy is on a different disk, a single disk failure won't reduce the database to an unusable state. Even if the only checkpoint record was in the file on a crashed disk, its copy would still contain a valid version of it. Oracle will know to avoid the bad disk for future writes and for any further archiving activity.

The term Oracle uses for copied sets of redo logs is multiplexing. You're strongly encouraged, therefore, to multiplex every log group with at least two members. Depending on the criticality of your systems, you may want even more. Rarely do you need to go beyond three members per group; in fact, with more than that, you're likely to experience performance problems due to the time it takes to write out all the copies of each redo block.

If you can mirror your log files at the operating-system level, you can also use mirroring to guard against a single disk loss. If you rely on operating-system mirroring alone, however, you still run the risk of having Oracle shut itself down if you lose a disk. System mirrors aren't visible to Oracle, so it may think it has lost its only copy of a log file if the primary disk crashes. System mirroring is a good way to create three- or four-way mirroring, however: create each Oracle log group with two members, and then mirror either one or both members.

Oracle mirrors versus operating-system mirrors for redo logs
There has been much discussion within Oracle and with Oracle's business partners about the pros and cons of using Oracle's multiplexing versus using a mirrored disk controlled by the operating system. The biggest benefit of Oracle mirrors is that they work on any operating system and on any disks; the biggest disadvantage is that Oracle insists that each available copy is written before it considers a flush of the redo buffer complete. This synchronous write process can be slower than operating-system mirrors. However, as disk subsystems become faster and add intelligent buffering capability, the latter difference becomes less of an issue. My best advice at this time is to use Oracle mirroring if you have no other option, and to experiment with both Oracle and operating-system mirroring if you can.
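You can check how well each group is protected by listing its members from V$LOGFILE. A sketch (standard view columns):

```sql
-- One row per member: a group that appears only once isn't multiplexed,
-- and a non-null STATUS (such as INVALID) flags a member in trouble.
SELECT GROUP#, MEMBER, STATUS
  FROM V$LOGFILE
 ORDER BY GROUP#;
```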

Adding Redo to Your Database


When you created your database, you created at least two redo log groups, the minimum number required to start an Oracle database. You may have created more than that, and you could have created each group with one or more members. This section looks at the commands used to add more log groups or more log members to an existing group. You'll see that the syntax is very similar to the log definition portion of the CREATE DATABASE command.

One option you can use when creating a redo log group allows you to identify a thread number. The thread number is useful only in a parallel server (multi-instance) database, so we won't examine its usage here. For further details on this option, see the Oracle8 SQL Reference Manual and the Oracle8 Parallel Server Administration manual.

Syntax conventions used in this book
Throughout this book, square brackets in command syntax indicate optional clauses, and an ellipsis ([...]) indicates a clause that can repeat. Another convention is the | character, which indicates that you choose one item or the other, not both (for example, choose either K or M). When you actually use the commands, don't include the brackets, ellipses, or | character.

The general syntax for creating a log group with a single member is as follows:

ALTER DATABASE [database_name]
ADD LOGFILE [GROUP [group_number]] filename
[SIZE size_integer [K|M]] [REUSE]

This code is for a multimember group:


ALTER DATABASE [database_name]
ADD LOGFILE [GROUP [group_number]]
(filename, filename [,...])
[SIZE size_integer [K|M]] [REUSE]

The database name is optional if it's included in the parameter file (as the DB_NAME parameter) for the instance. Otherwise, you need to identify the name with which the database was created and which is stored in the control file. If you omit the GROUP clause (the keyword GROUP and the group number), Oracle will assign the next available group number for you. Every group must have a unique number to identify it.

The filename can be a file system filename (which should be fully qualified with a path name), a raw partition name, or a link. In the multimember case, you should put the filenames inside a pair of parentheses and separate the names with commas.

You must include a SIZE or a REUSE clause. You can include both for file system files, as long as any existing file is the same size as the specification. For file system files, you must provide a size if the file doesn't already exist, and you must include the REUSE keyword if the file does exist; the command will fail if either condition is violated. For raw partitions, the REUSE keyword is meaningless because the new contents will always be written over the contents of the partition; therefore, it makes no difference whether you include it. You must include the file size, however, to avoid using the whole partition (two blocks of space must be reserved in each partition for operating-system information) or possibly writing beyond the partition boundaries. The K and M represent kilobytes and megabytes, respectively. Without either, the size_integer represents bytes.

The SIZE and REUSE options in database redo log groups
If you're creating a log group with multiple members, include the SIZE or REUSE keyword only once for all members of the group. They must all be the same size because they'll all contain the same data. This means (unless you're using raw devices) that if one file exists, they must all exist so that the REUSE option is valid for each named file. If some exist and some don't, you'll have to create the group with only those that exist (or only those that don't) and add the others as additional members. I show you how to do this a little later. No matter how you create them, all the files in a redo log group will have to be the same size.

Listing 6.1 shows a script file with three commands, each creating a new redo log group.

Listing 6.1 Create new redo log groups

01: ALTER DATABASE ADD LOGFILE
02: D:\ORANT\DATABASE\log10.ora SIZE 100K
03: /
04: ALTER DATABASE ADD LOGFILE GROUP 6
05: (E:\DATABASE\log6a.ora, F:\DATABASE\log6b.ora) SIZE 10M
06: /
07: ALTER DATABASE ADD LOGFILE GROUP 5
08: (E:\DATABASE\log5a.log, F:\DATABASE\log5b.log) REUSE
09: /

Numbering of code lines
Line numbers were included in Listing 6.1 and other code listings to make the code easier to reference in discussion. The numbers should not be included with any command-line commands, as part of any Oracle scripts, or within SQL statements.

On line 1 of Listing 6.1, the first redo log group will be created with a single member in the group, and the group's number will be assigned by Oracle. Group 6 will have two members, and the group is assigned its group number in the command on line 4. In these first two commands, Oracle will create all new files. Redo log group 5, as created by the command on line 7, will contain two members, both of which will replace existing files.

Adding one or more new members to an existing group can be done by identifying the group number (the simplest syntax) or by identifying the group with a list containing the full path names of all the current members. The syntax for the former, when adding just one more member, is

ALTER DATABASE database_name
ADD LOGFILE MEMBER filename [REUSE]
TO GROUP group_number
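For example, to start multiplexing an existing group, you might add a second member on a different disk. This is only an illustration (the group number and path are made-up values, not from Listing 6.1):

```sql
-- Add a second member to group 6; no SIZE clause is allowed because the
-- new member must match the size of the existing member(s).
ALTER DATABASE ADD LOGFILE MEMBER
  'G:\DATABASE\log6c.ora'
  TO GROUP 6;
```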

Different numbers of members per log group
Oracle doesn't require that you use the same number of log file members in each group. In fact, because you can add a new member or members to only one group at a time with the ALTER DATABASE command, you couldn't start mirroring your log files by adding a new member to each group unless groups could exist with different numbers of members, at least temporarily. However, even though you could run your database with two members in one redo log group, three in another, just one in a third, and so on, I don't recommend this practice. After you decide how many mirrored copies make sense for your requirements, you should use that number in all groups. This way, you won't experience periods of different performance or have to worry, should you lose a disk drive, whether you've lost a single-copy redo log or just one of a mirrored set.

The database name is optional, as when adding a new group. The group number must refer to an existing group. The filename must be a fully qualified file system name, a raw partition, or a link. The REUSE keyword is needed only if you're using a file system file that already exists, in which case it must be the same size as the other files in the group. A SIZE clause isn't needed because every member of the group must be the same size as the existing member(s).

The syntax for using the existing filename(s) to add a single member is as follows:

ALTER DATABASE database_name
ADD LOGFILE MEMBER filename [REUSE]
TO filename | (filename, filename [,...])

Everything is as described earlier, except that for a group with a single member, the filename alone is used in place of the GROUP clause, whereas a comma-separated list of the existing members' filenames (enclosed in parentheses) is required if the group already has more than one member. In either case, the filenames must be fully specified.

To add multiple members to a group within the same command, you simply change the new-member filename clause to read as follows in either version of the statement:

(filename, filename [,...]) [REUSE]

The use of REUSE is, as before, required if the files already exist.

Dropping Redo Logs and Handling Problem Logs


The most likely reason you would want to drop a redo log file is that you want to replace it with one of a different size. Once in a while, you may drop a log file because you've determined you have more online redo than you need. This is rarely beneficial, however, unless it's the only log file on the disk, because you generally shouldn't share disks where you're writing online redo logs with other file types. You may also need to drop a redo log member when you're experiencing performance problems due to too many multiplexed copies, or because you want to replace one or more members with operating-system mirrored copies.

To drop an entire log group, the following must be true:

- At least two other log groups will be available after the group is dropped.
- The group isn't currently in need of archiving.
- The group isn't the current redo group (the one to which log entries are now being written).

If these conditions are met, you can drop the entire log group with an ALTER DATABASE command that identifies the group. As with the ADD LOGFILE MEMBER option of this command (discussed in the preceding section), you identify the group with its group number, with its member's filename, or with a list of the filenames of its current members:

ALTER DATABASE database_name
DROP LOGFILE
GROUP group_number | filename | (filename, filename [,...])

The database name is needed only if the parameter file used to start the instance doesn't include the DB_NAME parameter, and the ellipsis ([,...]) shows a repeatable field.

You can drop one or more members from an existing log group with the DROP LOGFILE MEMBER variant of this command. You can't drop all the members with this command, however; you must use the preceding command to drop the group as a whole. The syntax for dropping a group member is

ALTER DATABASE database_name
DROP LOGFILE MEMBER filename

where the database name has the same requirements as previously discussed, and the filename must be fully qualified, as with all files discussed in these sections.

Once in a while, a redo log group may become damaged to the point where the database can't continue to function and you need to replace the redo group with a clean file or set of members. If the damaged log group isn't yet archived, or the log group is one of only two log groups in the database, however, you aren't allowed to drop it. Creating a third log might not help, because Oracle will continue to attempt to use the damaged log before moving on to the new one. In such cases, you need to simulate dropping and recreating the log with the CLEAR LOGFILE option of the ALTER DATABASE command. After you do this, you may need to perform a brand-new backup of your database, because there may be a break in the continuity of your archived logs, and you may have removed the only checkpoint record in the online redo.

If you do have to perform an emergency replacement of an online redo log, use the following command:

ALTER DATABASE database_name
CLEAR [UNARCHIVED] LOGFILE group_identifier
[UNRECOVERABLE DATAFILE]

where database_name and group_identifier have the same characteristics as described earlier for the DROP LOGFILE option. The UNARCHIVED clause is needed if the group was awaiting archiving before being cleared, and the UNRECOVERABLE DATAFILE option is required if the log would have been needed to recover an offline data file.

To find out about the current status of your redo logs, you can query various dynamic performance views. The V$LOGFILE view shows the names of the members of each redo log group and their status. In this view, NULL is a normal status, INVALID indicates that the file is unavailable, DELETED shows that the file has been dropped, and STALE is used when a file is a new member of a group or doesn't contain a complete set of records for some reason. The V$LOG and V$THREAD views provide more detailed status information and include records of the archive and system change numbers related to the redo files. Also, the view V$LOG_HISTORY is used mainly by parallel server databases for recovery operations.

SEE ALSO
How to set up redo log archiving for your database
Learn about tuning your redo logs for checkpoint and archive processing
More about the alert log and the types of messages it can provide, such as log file switches delayed by checkpoints or archiving
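Putting the drop and add commands together, replacing an undersized group might look like the following sketch (the group number, paths, and size are illustrative only, and the group must be INACTIVE and archived before the drop will succeed):

```sql
-- Drop the old group, then recreate it at the new size on the same disks.
ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE ADD LOGFILE GROUP 3
  ('E:\DATABASE\log3a.ora', 'F:\DATABASE\log3b.ora') SIZE 10M;
```

Remember to remove the old operating-system files afterward; dropping the group removes it from the control file but doesn't delete the files themselves.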

Managing Rollback Segments


Rollback segments perform two basic functions, both somewhat related:

- Allowing changes to be rolled back (as the name suggests). This activity restores block images to the state they were in before a change was made. The change could be a row INSERT, UPDATE, or DELETE, but could also be simply a change in the header portion of the block to reflect a change in the status of a transaction.
- Providing read-consistent images of blocks. These are copies of blocks restored to look just as they did when a query or series of queries began. Known as "undo blocks," they allow a query to read blocks being changed by other users, preventing it from seeing uncommitted changes (dirty reads) or changes that occur while the query is executing (inconsistent reads).

To perform its function, a rollback segment stores a before image of a column, row, or other block element before the change is applied to the block. By using the address of this changed data, also stored with the before image, a rollback or read-consistency operation can overlay the changed information with this record of what it looked like before the change.

Dirty reads
A "dirty read" is a query that returns a value from a row that's part of an as-yet uncommitted transaction. If the transaction is subsequently rolled back, the query has returned a value that's never really been stored in the database. An inconsistent read occurs when a query reads some blocks before a transaction changes them and other blocks after the same transaction changes those.

During a rollback operation, the before image data is applied directly to the data block image where the transaction had made its changes. Rollbacks can occur for a number of reasons, including, but not limited to, the following:

- The user or application issuing a ROLLBACK command
- A single statement failing after making some changes
- A transaction failing because the user is unexpectedly disconnected from the database
- An instance being recovered following an instance crash; the transactions incomplete at the time of the failure are rolled back as part of the instance recovery mechanism

In some cases, particularly the latter, the blocks that need rollback information applied may be stored only on disk rather than in memory.

When a read-consistent block image is needed, Oracle first copies the block into a different memory location inside the database buffer cache. The original block image can continue to be manipulated by any active transactions that need to modify it. Oracle then applies the rollback information to the copy of the block, called the "undo block." In some cases, a long-running query may encounter a block that has been changed by multiple transactions subsequent to the start of the query. In such a case, the undo block will be further modified by applying the before images from each transaction until the undo block resembles how the original block looked when the query began. The query then reads the undo block as opposed to the "real" block.

Rather than allowing rollback segments to grow indefinitely, Oracle reuses the blocks that contain before images of completed transactions. Over time, the entire rollback segment is recycled many, many times as new transactions find space for their rollback entries. This reuse of space is controlled rather than haphazard, however. For the read-consistency feature to work, the before images needed by a query must be available for the whole duration of the query. If a new transaction simply reused any available rollback block, it could be the one needed by an executing query. To help avoid this, the space is used in a circular fashion: the oldest before images are overwritten first. To simplify the code that supports this activity, a couple of rules are applied to rollback segments:

- Only one extent is considered to be the active extent. When a new transaction needs to store a before image, it's assigned to a block within the active extent. As soon as the active extent fills up, the next extent is made the active extent. Transactions that run out of space in their assigned block will be given a second block in the active extent or, if none are available, will be assigned a block in the next extent, making it the active extent.
- When an extent fills up, if the next extent still contains at least one block with before images from a still-active transaction, that extent isn't used. Instead, Oracle builds a brand-new extent and makes it the active extent. In this way, all the blocks in the extent with the active transaction are left available for queries that might need their contents to build undo blocks. This behavior is shown in Figure 6.2.

Figure 6.2: Oracle uses rollback segment extents in a circular fashion unless they're all busy, in which case it builds a new one.

By cycling through the extents, or building new ones when necessary, a block in, say, extent 1 won't be overwritten until all the blocks in all the other extents have been reused. This allows before images to remain available for the longest time possible, given the current size of the rollback segment. Preserving the before images for queries is important because, if a query needs a before image that's not available, the query can't continue. Without the before image, the query can't reconstruct the block in question to look as it did at the query start time, and it terminates with an error message: ORA-1555 - Snapshot too old.

The message ORA-1555 - Snapshot too old is usually a warning that at least one of your rollback segments is too small to hold enough records to provide read consistency. If it occurs very infrequently, however, it may simply indicate that a report, or other query-intensive program, ran into a busy period of transaction processing that it usually avoids. If rerunning the problem program succeeds, you may not need to change your rollback segment sizes for this infrequent occurrence.


The ORA-1555 error message
One cause of the ORA-1555 problem needs to be solved by the application developer rather than by a change in the rollback segment sizing. The error occurs if a program is making changes to many rows in a table by using an explicit query cursor (either in PL/SQL or in a 3GL language with Oracle precompiled code) to read through the data, and additional cursors to make changes to the required rows. If these individual row changes are committed, the query cursor needs to build read-consistent images of the affected blocks. While this may not involve much rollback information itself, it does require the query to find the transaction entry information in the header blocks of the rollback segments involved. It's the sheer number of transactions, not their size, that causes ORA-1555 errors in this type of program.

The following sections discuss characteristics of transaction rollback and read consistency that you need to consider when determining the size and number of your database's rollback segments.

Determining the Number of Rollback Segments


A rollback segment uses the first block in its first extent to build a list of information about the transactions assigned to it. This list can contain only a fixed number of entries because the block size is itself fixed. Therefore, if your database needs to support many concurrent transactions, you should consider adding more rollback segments. Although Oracle tries to balance the workload between the rollback segments as new transactions begin, if every rollback segment is already supporting its maximum number of transactions, new transactions will be forced to wait until others complete.

The SYSTEM rollback segment
When a database is created, the default rollback segment is created in the SYSTEM tablespace, and it takes the default storage parameters of the tablespace. You cannot drop this rollback segment; it is used by Oracle for recursive SQL.

For performance reasons, you really shouldn't allow each rollback segment to support its maximum number of transactions. If you do, you'll overwork the header blocks of the rollback segments. Not only do new transactions have to place their entries on these blocks, but the status of ongoing transactions has to be recorded, including information about the extents they're actively using. Finally, when a transaction completes, the status of the transaction (including a system change number for committed transactions) has to be recorded in the transaction's slot in the header block.

There's no absolute rule as to the average number of transactions you should strive for per rollback segment. In most cases, the best number is between 4 and 10. For longer-running transactions that consume more rollback space, you should use a lower number in this range; similarly, for shorter transactions, a higher number is a better target. You can get an idea of what your transaction mix looks like by examining the dynamic performance tables. When you know the total number of concurrent transactions, divide this number by the appropriate value between 4 and 10, based on transaction size, to find a good number of rollback segments to try. You can always adjust this number later.

SEE ALSO
Determining if you have too many or too few rollback segments by examining contention statistics
Additional information on rollback segment performance
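One way to sample your concurrent transaction count is to query V$TRANSACTION during a busy period. A sketch of the arithmetic described above (the divisor range of 4 to 10 comes from the guideline in this section):

```sql
-- Count active transactions and bracket a rollback segment count using
-- the 4-to-10 transactions-per-segment rule of thumb.
SELECT COUNT(*)            AS active_txns,
       CEIL(COUNT(*) / 10) AS low_estimate,
       CEIL(COUNT(*) / 4)  AS high_estimate
  FROM V$TRANSACTION;
```

Run the query several times at peak load and work from the highest counts you see, since a single sample can easily miss the busiest moment.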

Sizing Your Rollback Segments


After you determine how many rollback segments you need, you need to figure out how large they should be. For most databases, there is an ideal rollback segment size for the normal workload. At this size, rollback segments contain sufficient space to hold the rollback data needed for all the concurrently active transactions at any point in time, yet they achieve this without being oversized, which would waste space. Once in a while, you may have a special job or program that needs an extra-large rollback segment. If so, you can build one such rollback segment and either leave it available at all times or leave it offline until needed.

Assigning transactions to specific rollback segments


To ensure that a transaction uses a specific rollback segment, you can take other rollback segments offline, or you can explicitly assign the rollback segment with a SET TRANSACTION USE ROLLBACK SEGMENT rollback_segment_ name command. If you have concurrent transactions or the segment is needed by an application that runs outside your control, explicit assignment is better. The SET TRANSACTION command must be executed before every transaction if the same rollback segment is needed for each one. You generally don't have to worry about making your rollback segments too small because, like other segments, they can grow automatically as more space is needed. This growth does depend on whether the segment has reached the maximum number of extents you've defined for it and on the amount of room remaining in the tablespace where it's stored. See the following section, "Adding Rollback Segments," for details on how to set the extent maximums and tablespace allocation. I don't recommend letting Oracle take care of rollback segment growth for you for a couple of reasons: q Any such dynamic growth will slow down the process that incurs the overhead of finding the required free space and allocating it to the segment. q Studies performed at Oracle have shown that rollback segments perform best when they have between 10 and 20 extents. If you rely on Oracle to add extents as needed, you may have segments well outside these ideal limits. Another problem with automatic growth is that, once in a while, something will occur that will make it grow far larger than is typically necessary. One example I have encountered was a program that, following a minor change, got itself into a processing loop that caused it repeatedly to update the same few records without committing the changes. As a result, the rollback segment handling the transaction kept growing until its tablespace ran completely out of space. At that point, the transaction failed. 
When that happened, the space in the rollback segment taken up by the runaway transaction entries was freed up for use by subsequent transactions, but the rollback segment was now almost the size of the tablespace. When a different transaction, assigned to another rollback segment, needed more space for its entries, it failed because its rollback segment had insufficient room to grow.

By using the OPTIMAL entry in the STORAGE clause of the CREATE or ALTER ROLLBACK SEGMENT command, you can home in on the best size for your rollback segments. The OPTIMAL value causes the rollback segment to perform a special check when it fills up its current active extent. If the sum of the sizes of the current extents is greater than OPTIMAL, rather than just look to see whether the next extent is available to become the active extent, the server checks the one after that, too. If this one is also available, the server drops the next extent rather than make it current. Now, if the total rollback segment is at its optimal size, the current extent becomes the one following the dropped extent. But if the total size is still greater than OPTIMAL, the extent following this one is checked for availability and the same process is repeated. Eventually, the rollback segment is reduced to optimal size by the deletion of extents, and the next remaining extent becomes the current extent.

Rollback segment extents are dropped in a specific order
The extents are dropped in the same order that they would have been reused. This activity results in the oldest rollback entries being dropped, preserving the most recent ones for use by ongoing queries.

You can query the dynamic performance table V$ROLLSTAT to determine how many times a rollback segment has grown through the addition of new extents (the EXTENDS column value), and how many times it has shrunk (the SHRINKS column value).
If these numbers are low, or zero, the rollback segment is either sized correctly or it may still be larger than needed. You can adjust the value of OPTIMAL downward and check the statistics again later. If they're still low, or zero, your segment may still be oversized. However, if they have started increasing, it means the rollback segment needs to be larger. If the grow-and-shrink counts are high when you first look at the table, the rollback segment has always been too small.

Problems to look for when decreasing rollback segment size
When reducing the size of a rollback segment, you need to monitor your users' queries to ensure that the number of Snapshot too old messages doesn't increase. Remember that queries may need rollback information long after a transaction is finished. You can't just size your rollback segments to make them as small as the transaction load requires if this will interfere with standard query processing. Even a report program that runs once a month may require you to maintain larger rollback segments than your records of rollback segment growth and shrinkage would indicate are needed. If the program fails only once or twice a year, the cost of rerunning it may not be as expensive as the cost of the extra disk space needed to support larger rollback segments. But if it fails almost every month, you may need to increase your rollback segment sizes.

http://www.informit.com/content/0789716534/element_006.shtml (12 of 18) [26.05.2000 16:47:21]

Due to the automatic nature of the extent additions and deletions, you don't have to re-create a rollback segment that's the wrong size; you can control it with the OPTIMAL value once you find its stable size. As mentioned earlier, however, a rollback segment performs optimally when it has between 10 and 20 extents. This number provides the best balance between the need for transactions to find available space and the availability of required rollback entries for queries needing read-consistent data. Of course, based on the discussion of space management in Chapter 5, we're talking about rollback segments where all the extents are the same size. If your rollback segment's ideal size corresponds to this preferred number of extents, you can leave it as now defined. If the number of extents is below 10 or much above 20, however, you should consider dropping it and re-creating it with around 15 equal-sized extents, such that its total space remains the same.
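As a sketch of the monitoring just described, the following query joins the standard V$ROLLNAME and V$ROLLSTAT views to show each online segment's growth and shrink counts; the layout is just one way to present the columns:

```sql
-- Check how often each online rollback segment has grown or shrunk
-- since instance startup. High EXTENDS/SHRINKS counts suggest the
-- segment (or its OPTIMAL setting) is too small.
SELECT n.name,
       s.extents,
       s.rssize,       -- current size in bytes
       s.optsize,      -- OPTIMAL setting, if any
       s.extends,
       s.shrinks
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;
```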

Adding Rollback Segments


If you've just created a new database, there's only one rollback segment, SYSTEM, and it won't support transactions on segments outside the SYSTEM tablespace. From the discussions in Chapter 5, you should be building all your application tables and indexes in alternative tablespaces, and so you'll need additional rollback segments to support transactions against them. If you have a running database, based on statistical results from your monitoring and tuning activity, you may determine that you need to add one more rollback segment to the existing set. In either case, you follow exactly the same steps.

Add rollback segments
1. Create the rollback segment.
2. Bring the rollback segment online.
3. Alter the parameter file to bring it online automatically whenever the instance starts.

Creating a Rollback Segment


Listing 6.2 shows the syntax of the command to create a rollback segment.

Listing 6.2 Create a rollback segment
01: CREATE [PUBLIC] ROLLBACK SEGMENT segment_name
02:   [TABLESPACE tablespace_name]
03:   [STORAGE ( [INITIAL integer [K|M]]
04:              [NEXT integer [K|M]]
05:              [MINEXTENTS integer]
06:              [MAXEXTENTS integer]
07:              [OPTIMAL NULL|integer[K|M]] ) ]

On line 1, PUBLIC causes the rollback segment to be public rather than private. (This distinction is discussed in the next section, "PUBLIC versus PRIVATE Rollback Segments.") segment_name is a valid Oracle name. On line 3, INITIAL is the size of the first extent, in bytes (the default) or in kilobytes (K) or megabytes (M). NEXT on line 4 is the size of the second and subsequent extents, again in bytes, kilobytes, or megabytes. Line 5 shows MINEXTENTS, which is the number of extents (minimum two) included in the rollback segment at creation time and the number of extents that must always belong to the segment. MAXEXTENTS on line 6 is the largest number of extents the segment can acquire.

Although MAXEXTENTS can be set to the value UNLIMITED, this isn't recommended for rollback segments. If you've sized your rollback segments correctly, they shouldn't need to grow much larger than this; unlimited growth would result from erroneous processing. Such processing could fill up the available space, restricting the growth of other rollback segments performing valid work, and would take as long to roll back, when it finally ran out of space, as it did to build all the rollback entries in the first place. Until the rollback of this transaction is completed, which could conceivably take many, many hours, if not days, the space consumed by the rollback entries can't be freed.

No PCTINCREASE option for rollback segments
Every extent, other than the first, must be the same size.

With a non-NULL value for OPTIMAL, any extent with no active transactions assigned to it can be dropped if, by so doing, the total segment size will still be greater than the OPTIMAL size. The initial extent is never dropped, however, because it maintains the transaction table in its header block. Also, only the extents that have been inactive the longest are dropped. If there are four inactive extents but an active one between the third and fourth of these, only the first three will be dropped. This is to avoid removing records that might be needed for read-consistent queries.

On line 7 is OPTIMAL, which determines how the rollback segment can shrink. A value of NULL prevents the rollback segment from shrinking automatically; a size (in bytes, kilobytes, or megabytes) causes the rollback segment to shrink automatically by dropping inactive extents. OPTIMAL must be set to a value no smaller than the sum of bytes in the first MINEXTENTS extents, which can be computed from the formula

OPTIMAL >= INITIAL + (NEXT * (MINEXTENTS - 1))
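Putting the clauses together, here is one possible private rollback segment with 15 equal extents, in line with the 10-to-20-extent guideline discussed earlier; the names rbs01 and rbs_ts and the sizes are assumptions for illustration:

```sql
-- A hypothetical rollback segment with 15 equal-sized 1MB extents.
CREATE ROLLBACK SEGMENT rbs01
  TABLESPACE rbs_ts
  STORAGE ( INITIAL    1M
            NEXT       1M
            MINEXTENTS 15
            MAXEXTENTS 30
            OPTIMAL    15M );  -- >= INITIAL + NEXT * (MINEXTENTS - 1) = 15MB
```

Here OPTIMAL is set exactly at the minimum the formula allows, so the segment can grow under load but shrinks back to its 15-extent shape when the extra space is no longer active.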

PUBLIC Versus PRIVATE Rollback Segments


The main reason Oracle supports public rollback segments is to help DBAs who manage multiple Oracle instances running against a database with Oracle Parallel Server. The management of rollback segments on some hardware platforms running Parallel Server is almost impossible without their being public, mainly because you don't have to name a public rollback segment in a parameter file to make it come online when an instance starts up. Instead, each instance takes one or more segments from the pool of public rollback segments as its own.

Although you can use public rollback segments in a non-parallel database, you're encouraged to use private rollback segments. By naming the rollback segments you want to make active in an instance in your parameter file, you have full control over which ones are active. While you can name public rollback segments in the parameter file, two other parameters, TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT, can also bring additional ones online, if available. By using private rollback segments, you're guaranteed that only those named in the parameter file will be brought online automatically at instance startup.

How Oracle activates public rollback segments for an instance
Oracle evaluates the quotient of TRANSACTIONS and TRANSACTIONS_PER_ROLLBACK_SEGMENT when it starts up. If these parameters were set to 210 and 25, respectively, the result of this calculation would be 8.4. Oracle rounds up this value to the next integer and attempts to acquire this many rollback segments for the instance. First, it counts the number assigned by the ROLLBACK_SEGMENTS parameter and then calculates how many more are needed. To continue the example, suppose that five rollback segments were named in ROLLBACK_SEGMENTS; 9 minus 5, or 4 more, would be needed. If at least four public rollback segments aren't yet assigned to an instance, the current instance will take four of these and bring them online for itself. If there are fewer than four, it will bring as many online as are available. If no public rollback segments are available, the instance will continue to run with just the five named in the parameter file.

You can take private and public rollback segments offline and return them to online status while the instance is running, as discussed in the following section.
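The arithmetic in the sidebar traces back to the parameter file. This hypothetical init.ora excerpt (the segment names are invented) shows the values used in the example:

```text
# Hypothetical init.ora fragment matching the sidebar's example.
# CEIL(210 / 25) = 9 rollback segments wanted; 5 are named here,
# so the instance tries to acquire 4 more from the public pool.
transactions                      = 210
transactions_per_rollback_segment = 25
rollback_segments                 = (rbs01, rbs02, rbs03, rbs04, rbs05)
```

With private rollback segments only, the rollback_segments list alone determines what comes online, which is exactly the predictability the text recommends.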

Altering Rollback Segments


Before working with a rollback segment, you may need to determine its current status. This can be a little confusing due to the number of different data dictionary tables you might need to examine to get the full picture. Table 6.1 shows the various types of rollback segment status and characteristics, along with the data dictionary table and column that contain this information.

http://www.informit.com/content/0789716534/element_006.shtml (14 of 18) [26.05.2000 16:47:21]

informit.com -- Your Brain is Hungry. InformIT - Managing Redo ... Rollback Segments, and Temporary Segments From: Using Oracle8

Table 6.1 Identifying the status of a rollback segment

Status             Data Dictionary Table    Table Column
ONLINE             DBA_ROLLBACK_SEGS        STATUS
                   V$ROLLSTAT               STATUS
OFFLINE            DBA_ROLLBACK_SEGS        STATUS
                   V$ROLLSTAT               STATUS
PENDING OFFLINE    V$ROLLSTAT               STATUS
DEFERRED           DBA_SEGMENTS             SEGMENT_TYPE
PRIVATE            DBA_ROLLBACK_SEGS        OWNER (= SYS)
PUBLIC             DBA_ROLLBACK_SEGS        OWNER (= PUBLIC)

In an online state, the rollback segment is available for use and may have active transactions running against it. In an offline state, the rollback segment is idle and has no active transactions. A pending offline state is a transition state between being online and being offline. When you alter an online rollback segment to be offline, it won't accept any more transactions, but will continue to process any current transactions until they complete. Until these are all committed or rolled back, the rollback segment remains in the pending offline state.

A deferred rollback segment holds rollback information for transactions that can't complete because the tablespace to which they need to write has gone offline. These transactions will have failed due to the loss of the tablespace, but they can't be rolled back because the blocks in the offline tablespace can't be read or written. To be able to complete the necessary rollbacks when the tablespace comes back online, the associated rollback entries are stored in the SYSTEM tablespace in deferred rollback segments.

Although not truly a status, Table 6.1 also includes an entry for the PRIVATE and PUBLIC rollback segment descriptions so that you know how to identify which is which. As you can see, this is shown indirectly in the OWNER column of the DBA_ROLLBACK_SEGS table, where an entry of SYS indicates that it's a private rollback segment and an entry of PUBLIC shows it to be a public rollback segment.

The ALTER ROLLBACK SEGMENT command lets you change the status of a rollback segment manually. The full syntax for this command is

ALTER ROLLBACK SEGMENT segment_name
  [ONLINE|OFFLINE]
  [SHRINK [TO integer [K|M]]]
  [STORAGE (storage_clause)]

The keywords ONLINE and OFFLINE simply take the rollback segment between the basic states. As discussed earlier, a rollback segment may not go completely offline immediately; it may have to wait until pending transactions complete. If you're taking a rollback segment offline in preparation for dropping it, you may need to wait until it's completely offline, as shown in V$ROLLSTAT. You're not allowed to take the SYSTEM rollback segment offline for any reason.

The SHRINK keyword causes the rollback segment to shrink to its optimal size, or to the size provided when you execute the ALTER ROLLBACK SEGMENT command. As with automatic shrinkage, you can't reduce the size to less than the space taken by MINEXTENTS.

The storage clause of the ALTER ROLLBACK SEGMENT command is identical to its counterpart in the CREATE ROLLBACK SEGMENT statement, with the following provisos:

- You can't change the value of INITIAL or MINEXTENTS.
- You can't set MAXEXTENTS to a value lower than the current number of extents.
- You can't set MAXEXTENTS to UNLIMITED if any existing extent has fewer than four blocks (and you're advised not to use this value anyway, for the reasons discussed earlier).
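A hedged sketch of the offline-and-verify sequence (the segment name rbs03 is an assumption):

```sql
-- Stop new transactions from being assigned to the segment.
ALTER ROLLBACK SEGMENT rbs03 OFFLINE;

-- DBA_ROLLBACK_SEGS reports OFFLINE only once the transition
-- is complete.
SELECT segment_name, status
  FROM dba_rollback_segs
 WHERE segment_name = 'RBS03';

-- While existing transactions are still finishing, the segment
-- shows up in V$ROLLSTAT with a PENDING OFFLINE status.
SELECT n.name, s.status
  FROM v$rollname n, v$rollstat s
 WHERE n.usn = s.usn;
```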

Dropping and Shrinking Rollback Segments


At times, you may want to change the physical structure of a rollback segment, such as altering its basic extent size or moving it to another tablespace. You may even want to remove a rollback segment because you no longer think you need it. If you have a rollback segment that has grown much larger than its optimal size and you want it reduced to that size as soon as possible, you can shrink it yourself.

To remove a rollback segment, you must first take it offline, as discussed in the previous section. Recall that even if the command to take it offline works, you may not be able to remove it immediately. Until all the transactions now assigned to the segment complete, the status won't be permanently altered to offline, so you won't be able to drop it. Of course, after you change its status, no further transactions will be assigned to the rollback segment.

http://www.informit.com/content/0789716534/element_006.shtml (15 of 18) [26.05.2000 16:47:21]

informit.com -- Your Brain is Hungry. InformIT - Managing Redo ... Rollback Segments, and Temporary Segments From: Using Oracle8

As soon as a rollback segment is completely offline, meaning that the status in DBA_ROLLBACK_SEGS and V$ROLLSTAT is OFFLINE, you can remove it. To do this, issue the command

DROP ROLLBACK SEGMENT segment_name

If you need to reduce a rollback segment to its optimal size, you can just wait until this occurs automatically. However, if the segment is taking up space that might be needed by other segments, you can manually cause the segment to shrink by executing the command

ALTER ROLLBACK SEGMENT segment_name SHRINK [TO integer[K|M]]

As soon as you do this, the rollback segment will shrink. If it doesn't shrink to the desired size as specified in the command, or to OPTIMAL if you didn't specify a size, you may need to re-issue the command later. Some extents may still contain active transactions and so can't be dropped. There's also a chance that your command and the SMON background process were both trying to shrink the rollback segment concurrently. To do this, they must both store some rollback information in the segment themselves, and so may be interfering with the extents that each of them is trying to drop.
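Putting the two commands together, a hypothetical cleanup of an oversized segment named rbs_big might look like this (the name and size are assumptions):

```sql
-- Force an oversized segment back toward a sensible size now,
-- rather than waiting for automatic shrinkage to OPTIMAL.
ALTER ROLLBACK SEGMENT rbs_big SHRINK TO 15M;

-- Or, to remove it entirely: take it offline, wait until
-- DBA_ROLLBACK_SEGS shows OFFLINE, then drop it.
ALTER ROLLBACK SEGMENT rbs_big OFFLINE;
DROP ROLLBACK SEGMENT rbs_big;
```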

Working with Temporary Segments


A temporary segment is a database object that stores information for a server process that can't fit all the data it needs into memory during a sort, a hash join, or other related activities. If it resides in a regular tablespace, that is, one of type PERMANENT, the segment will be available to the server for as long as it's needed. At the end of this time, the segment will be dropped and the blocks it was occupying will be returned to the tablespace for use by another segment. Any number of server processes can be using temporary segments at one time, each managing its own segment and space allocation.

If a temporary segment exists in a tablespace defined as a TEMPORARY type, it's not dropped when the initial server process is done with it. Instead, a list of its extents is maintained in the data dictionary, and any other process needing temporary space is allocated one or more of its extents. As with other segments, if there's insufficient space to meet the demand at any given time, more extents are added automatically. The new extents are added to the list of available extents when the processes using them are done.

In PERMANENT or TEMPORARY tablespaces, temporary segments obtain their storage information from the default storage defined for the tablespace. The user has no opportunity to set extent sizes, maximum extent counts, and so on. Chapter 5 covers the details of setting up tablespaces with default storage sizes.

Sizing Your Temporary Tablespaces


Temporary tablespaces are difficult to plan for when you first build a database. You don't know just how many SQL statements that might need to use the space are likely to be executing concurrently, and you probably won't know how much temporary space any one of them will need. A rule of thumb you might consider is to make the tablespace about half as big as your largest table. This should allow work against the smaller tables to fit into the tablespace concurrently. However, it may not be sufficient space for work on the largest table to complete, particularly if other work is using the space simultaneously.

As your database is used, you can examine performance statistics to see just how many times your applications need temporary segments. Look at the values in the V$SYSSTAT view for the row where the NAME column value is sorts (disk). The number in the VALUE column is the number of sorts since instance startup that required space in a temporary segment. If you see that the frequency is high enough for multiple such sorts to be occurring simultaneously, you may want to add to your tablespace size.

Disk sorts versus memory sorts
If you query the V$SYSSTAT view with the statement SELECT * FROM v$sysstat WHERE name LIKE '%sorts%';, you see three rows of data: one for sorts done entirely in memory, one for sorts requiring use of disks (temporary segments), and one showing the total number of rows involved in both types of sorts. If the number of disk sorts is relatively high in comparison to the memory sorts, you may need to tune the memory area provided for sorting, as discussed in Chapter 18.

The number of rows sorted shown in V$SYSSTAT may help you determine whether a few relatively large sorts are requiring the temporary segment space, or if a lot of smaller sorts are just spilling over from the sort memory area. Because the row count is accumulated across memory and disk sorts, however, it can be difficult to tell how many rows are associated with each type. Also, a sort involving lots of rows may not be as memory intensive as a sort of far fewer, but much longer, rows. To take advantage of this statistic, you may have to monitor V$SYSSTAT on a sort-by-sort basis, with some knowledge of the nature of each sort being recorded in this dynamic performance view.

Of course, you may hear from your users if they run out of temporary space, because their applications will fail with an error if they can't acquire sufficient temporary space to complete. You should be careful, however, not to confuse some errors with lack of space in the temporary tablespace:

- A temporary segment may reach its MAXEXTENTS limit and not be able to extend any further, even though the tablespace still has room.
- Certain DML statements use temporary segments inside a standard tablespace to build the extents for a new or changed segment. If the DML statement can't find the required space, it fails with an error such as ORA-01652 unable to extend temp segment by number in tablespace name. Be sure to check that the named tablespace is really your temporary tablespace before you rush off and try to increase its size; it could be one of your user data tablespaces that's out of room.

Setting Storage Options for Your Temporary Tablespaces


As discussed earlier, temporary segments always build their extents based on the default storage values associated with their tablespace definitions. It's therefore critical that you build your temporary tablespaces with appropriate values in the default storage clause.

To understand what storage values are appropriate for temporary segments, you should think about what's being placed into these segments: the data being swapped out of the space in memory reserved for the type of activity that might need temporary space, either sorts or hash joins. Generally, the space for sorts is the smaller of these two, and the hash join space should be an integer multiple of the sort space size. Therefore, it makes the most sense to build extents that can hold at least the full amount of data likely to be flushed from memory during a sort.

If the extent is exactly the same size as the sort space, each set of data flushed to disk will need just one extent. If the extent is larger than the sort space but not a multiple of it, every other write or so from memory will probably need to write into two extents, which isn't as efficient as writing to a single extent. The problem is reversed after the sort, when the data has to be read back from disk: some reads will need to skip from one extent to another. If the extent is an integer multiple of the sort size, each set of data flushed from memory will fit into some part of a single extent. However, the last set of data flushed may not fill the extent being used, and the additional space will be wasted until the subsequent processing completes and the entire extent is released.

It's almost not worth making each extent in a temporary segment the same size as the SORT_AREA_SIZE initialization parameter. If the sort requires more space than this parameter allocates, it will need to write at least two sets of data into its temporary segment: the first set that initially fills up the sort space, and the balance of the sorted data. If no balance were left over to sort, the temporary segment wouldn't have been needed. If the extent size had been double the sort space (2 * SORT_AREA_SIZE), only one extent would have been needed.

Use large extent sizes for temporary segments
For sorts that almost fit into memory, such as one that uses less than twice the current sort area size, you may well find yourself tuning memory to hold the sort data completely, leaving only the very large sorts in need of temporary space. Such large sorts may need to write a number of sort runs to disk, maybe in the tens or twenties. It makes sense, therefore, to make each extent sufficiently large to hold these multiple runs, thus avoiding the overhead of having to find the required extent space more than once. Your temporary extent sizes may well be 10 to 100 times as large as your SORT_AREA_SIZE, depending on your situation and database use.

You may also want to set MAXEXTENTS based on the type of tablespace you're using. If it's of type PERMANENT, you should ensure that each concurrent disk sort can grow its temporary segment to a size that will let the other segments reach the same size. For example, in a 200MB tablespace where you expect five concurrent disk sorts, each temporary segment should be allowed to grow to 40MB; if your extent size is 2MB, MAXEXTENTS would need to be 20.

If you're using a TEMPORARY type tablespace, there will probably be only one temporary segment. You can ensure this by building it yourself right after you create your temporary tablespace, using the method described in Chapter 5. (You may not want to do this if you're using Oracle Parallel Server, but we'll ignore that case here.) If you build the temporary segment yourself, you'll know exactly how many extents can be contained by the tablespace, so just set the tablespace default to that number. Of course, if your tablespace is built with files that can autoextend, you should make the MAXEXTENTS value larger.

SEE ALSO
Building tablespaces specifically for temporary segments,
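Tying the guidelines together, a dedicated sort tablespace for the 200MB scenario above might be created like this; the tablespace name, file path, and sizes are assumptions for illustration, not values from the book:

```sql
-- A tablespace dedicated to temporary segments, sized per the
-- example: 200MB total, 2MB equal extents, up to 20 extents per
-- segment so five concurrent disk sorts can each reach 40MB.
CREATE TABLESPACE temp_ts
  DATAFILE '/u03/oradata/prod/temp_ts01.dbf' SIZE 200M
  DEFAULT STORAGE ( INITIAL     2M
                    NEXT        2M
                    PCTINCREASE 0
                    MAXEXTENTS  20 )
  TEMPORARY;
```

PCTINCREASE 0 keeps every extent the same size, matching the equal-extent assumption behind the sort-run arithmetic.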

Managing Your Temporary Segments


Due to their nature, you don't have much control over, and hence little management responsibility for, your temporary segments. A couple of issues might affect you: you may run out of space in your temporary tablespace, and you may decide to change the value of your SORT_AREA_SIZE parameter.

As with any other tablespace, if you run out of space in your temporary tablespace, you can add a new data file or extend an existing data file, assuming that you have the necessary disk space. After you do this, you may want to look at the MAXEXTENTS value for the tablespace to see whether it can be set higher to take advantage of the additional space.

Should you change the value of the sort space size by altering the parameter SORT_AREA_SIZE, your temporary extents may no longer align with the I/O to and from the sort space. If you're using a PERMANENT type of tablespace, just changing the default storage values for INITIAL and NEXT to the new value will be sufficient to ensure that the next temporary segments will be appropriately sized. If you have a TEMPORARY type tablespace, however, you'll need to drop the segment from it to free up the space to build one with new extent sizes.

To drop the segment(s) from a TEMPORARY tablespace, you need to alter it to be a PERMANENT tablespace. This will cause its contents to be treated as any other temporary segment, so they will be dropped automatically. Before a replacement segment is built, you should ensure that the default storage settings are changed to reflect the new extent size you require, as you would do for temporary segments in a PERMANENT tablespace. Therefore, before altering the tablespace back to TEMPORARY status, make sure that you alter the tablespace default storage settings. As when creating a new temporary tablespace of type TEMPORARY, you may want to prebuild the temporary segment to its maximum size as soon as the status is converted.
The command to alter the tablespace type is

ALTER TABLESPACE tablespace_name PERMANENT|TEMPORARY

Converting a tablespace to TEMPORARY requires that all other segments be removed from it
Anytime a tablespace is of type PERMANENT, other types of segments in addition to temporary segments can be created in it. To switch it to a TEMPORARY type, all such segments must be removed. This is true whether the tablespace was originally created to hold different types of segments or if it was made PERMANENT for only a short time while the storage values were being changed.

SEE ALSO
More information about setting up temporary tablespaces and their storage options,
Initialization parameters associated with sorts,
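The drop-resize-restore sequence just described could be sketched as follows; temp_ts and the 4MB extent size are hypothetical:

```sql
-- 1. Make the tablespace PERMANENT so its cached temporary segment
--    is treated like any other temporary segment and dropped.
ALTER TABLESPACE temp_ts PERMANENT;

-- 2. Change the defaults so the replacement segment gets the new
--    extent size (aligned with the new SORT_AREA_SIZE).
ALTER TABLESPACE temp_ts
  DEFAULT STORAGE ( INITIAL 4M NEXT 4M PCTINCREASE 0 );

-- 3. Switch it back to TEMPORARY.
ALTER TABLESPACE temp_ts TEMPORARY;
```

Remember that step 1 succeeds only if no other (non-temporary) segments live in the tablespace, per the sidebar above.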


informit.com -- Your Brain is Hungry. InformIT - Adding Segments for Tables From: Using Oracle8


Adding Segments for Tables




From: Using Oracle8. Author: David Austin. Publisher: Que.

Table Structure
  Choosing a Column Datatype and Length
  Character Data
  Numeric Data
  Date Data
  Binary Data
Sizing and Locating Tables
Setting Storage Parameters
  INITIAL
  NEXT
  PCTINCREASE
  MINEXTENTS
  MAXEXTENTS
Setting Space Utilization Parameters
  Creating Tables for Updates
  Creating Tables with High Delete Activity
  Creating Tables for Multiple Concurrent Transactions
Building Tables from Existing Tables
Monitoring Table Growth
Managing Extent Allocation
Removing Unused Space
Using Views to Prebuild Queries
  Changing Column Names with Views
  Dropping Columns with Views
  Hiding Data with Views
  Hiding Complicated Queries
  Accessing Remote Databases Transparently with Views
  Creating and Handling Invalid Views
  Dropping and Modifying Views
  Updating Data Through Views
  View Consistency


informit.com -- Your Brain is Hungry. InformIT - Adding Segments for Tables From: Using Oracle8

Table Structure
Tables are the most common structures in almost all relational databases. They consist of rows (also known as tuples or instances in the worlds of relational theory and modeling, respectively) and columns (attributes). When queried by Oracle's SQL*Plus tool, they're displayed in a table format, with the column names becoming the headings. Such a display gives the illusion that the data is stored in the database just the way it appears onscreen:

- The columns are lined up under column headings.
- The data in each column is the same width from row to row.
- Numeric data is aligned to the right of the field, other data to the left.
- A fixed number of records fits on each page.
- Each field is separated from its neighbors by a fixed number of bytes.

A query against a table in SQL*Plus could result in the following output:

CUST_NUMBER COMPANY_NAME            PHONE_NUMBER LAST_ORDER
----------- ----------------------- ------------ ----------
        100 All Occasion Gifts      321-099-8642 12-MAR-96
        103 Best of the Best        321-808-9753 05-MAY-98
        110 Magnificent Mark's      322-771-3524 11-DEC-97
        111 Halloween All Year      321-998-3623 25-FEB-98
...

Helping SQL*Plus format queries
SQL*Plus may not always produce query output just the way you want to see it. For example, a column that's only one character wide will have a one-character column heading by default. Most users won't find the first character of the column name sufficient for identifying the column's contents. Similarly, some columns may be defined to hold many more characters than you need to see when casually querying the table. These longer columns can cause each row to wrap over multiple output lines, making it difficult to read the results. SQL*Plus provides formatting commands to help you produce output that has meaningful column names and column widths. You can even use its advanced features to create subtotals, grand totals, page titles and footers, and other standard reporting features.
These are all covered in the Oracle8 SQL*Plus User's Guide.

Although this formatting by SQL*Plus is very convenient and emphasizes the notion that relational databases store data in the form of two-dimensional tables, it doesn't really represent the true internal structure of the database tables. Inside Oracle's data files, the rows are stored very efficiently on Oracle database blocks, leaving very little free space (unless it's believed to be needed) and with little regard for how the data will look if displayed onscreen or in a report. The blocks themselves are stored in a data segment consisting of one or more extents. A very simple table will have a single extent, the first block containing its header information and the other blocks storing the rows themselves. A larger table may contain many extents, and a very large table may have additional header blocks to support the more complex structure.

The data dictionary maintains the table definition, along with the storage information and other related object definitions, such as views, indexes, privileges, and constraints, some of which can be created with the table itself. (Views are covered at the end of this chapter, indexes in Chapter 8, "Adding Segments for Different Types of Indexes," privileges in Chapter 10, "Controlling User Access with Privileges," and constraints in Chapter 17, "Using Constraints to Improve Your Application Performance.")

The simplest version of the CREATE TABLE command names the table and identifies a single column by name and the type of data it will hold. Its syntax is as follows:

    CREATE TABLE table_name (column_name datatype);

Another trick to create meaningful column names

It's becoming more and more common to prefix field names with letters that indicate which datatype is stored in the field; for example, the prefix in dtmStartDate indicates a field holding date/time data. You should, of course, choose a name for the table that's meaningful to you and your user community.
http://www.informit.com/content/0789716534/element_007.shtml (2 of 21) [26.05.2000 16:47:43]


Similarly, the column name should provide some useful information about the purpose of the data it will hold.
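Putting the simple syntax together with meaningful names, a minimal example might look like the following (the table and column names here are only illustrations, not part of any schema used later in the chapter):

```sql
-- A one-column table created with the simplest form of CREATE TABLE
CREATE TABLE customer (company_name VARCHAR2(40));
```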

Choosing a Column Datatype and Length


Before building a table, you should know what type of information each column will need to hold. By knowing this, you can select an appropriate datatype and, possibly, a length. In some cases, you have a choice of datatypes that could be used for a particular column.

Character Data
You can store freeform character data in a number of formats. Table 7.1 shows the related datatypes and their characteristics.

Table 7.1  Definitions of Oracle datatypes

    Datatype    Max Length  Preferred Uses                                       Notes
    BFILE       4GB         Binary data stored outside the database, allowing    1
                            fast byte stream reads and writes
    BLOB        4GB         Variable-length binary objects                       1
    CHAR        2,000       Short fields or fields that need fixed-length        2
                            character comparisons
    CLOB        4GB         Variable-length, single-byte character fields        1
                            exceeding 2GB
    DATE        7           Dates and times                                      3
    LONG        2GB         Variable-length character fields that exceed         1,4
                            4,000 bytes
    LONG RAW    2GB         Variable-length, uninterpreted binary data           1,4
    NCHAR       2,000       Multibyte characters in short fields or fields       2,5
                            that need fixed-length character comparisons
    NCLOB       4GB         Variable-length, multibyte character fields          1
                            exceeding 2GB; support only one character width
                            per field
    NUMBER      38          Numbers, having precision of 1 to 38 digits and
                            scale of -84 to 127
    NVARCHAR2   4,000       Variable-length fields that store single or
                            multibyte characters that don't need fixed-length
                            comparisons
    RAW         2,000       Variable-length, uninterpreted binary data           4,6
    ROWID       10          Extended row IDs
    VARCHAR     4,000       Variable-length fields that don't need               6,7
                            fixed-length comparisons
    VARCHAR2    4,000       Variable-length fields that don't need               6,8
                            fixed-length comparisons

1. There's no default length and no mechanism to define a maximum length.
2. Trailing blanks are stored in the database, possibly wasting space for variable-length data. Default length is 1 character; to provide a maximum field length, add the required length in parentheses following the datatype keyword.
3. Dates are always stored with seven components: century, year, month, day, hour, minute, and second. They can range from January 1, 4712 BC to December 31, 4712 AD.
4. Supported for Oracle7 compliance and may not continue to be supported; large objects (LOBs) are preferred.
5. The maximum length is the maximum number of bytes. For multibyte characters, the total number of characters will be less, depending on the number of bytes per character.
6. There's no default length. You must always supply your own maximum length value in parentheses following the datatype keyword.
7. Will stay compliant with the ANSI standard definition for variable-length character fields.

8. Will stay compliant with the definition from Oracle7.

Internally, with the exception of the CHAR and DATE datatypes, Oracle stores only the characters provided by the application in character fields. If you specify a maximum length (when allowed) or use a predefined type at its maximum length, you don't waste any storage space when your records have fewer characters than the field can hold. The CHAR datatype, however, always adds trailing blanks (if needed) when the supplied data is less than the defined field length. The DATE datatype always uses 7 bytes, one for each date/time component, applying a component-specific default for each missing component.

CHAR Versus VARCHAR2

Besides the storage differences (before being written to the database, CHAR fields are always blank-padded to the full defined length, whereas VARCHAR and VARCHAR2 fields are never padded automatically), the two types of fields sort differently and compare differently. CHAR fields are sorted and compared using their full (padded) length, while variable character fields are sorted and compared on just the characters included in the string.

Space management with CHAR and VARCHAR2

Some people prefer to use CHAR rather than VARCHAR2 datatypes to reduce the likelihood that rows will grow in length when updates increase the number of characters in a field. Such growth can cause the row to become too long to fit into the block. However, Oracle provides a PCTFREE parameter to allow for row growth. I prefer to use PCTFREE to manage space rather than force all character fields to be padded with blank characters, which I consider to be wasted space. See the "Creating Tables for Updates" section in this chapter for details on the PCTFREE parameter.
A simple test is shown in the following few statements:

    CREATE TABLE test_padding (fixed_col CHAR(5), var_col VARCHAR2(5));
    INSERT INTO test_padding VALUES ('A','A');
    INSERT INTO test_padding VALUES ('ABCDE','ABCDE');
    SELECT * FROM test_padding WHERE fixed_col = var_col;

    FIXED_COL VAR_COL
    --------- -------
    ABCDE     ABCDE

Only the row where all five characters have been filled in by the VALUES clause is displayed. The row with the single letters doesn't show the two columns having equal values because the FIXED_COL column is comparing all five characters, including trailing blanks, to the single character from the VAR_COL column.
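If you need a comparison like this to succeed regardless of the CHAR padding, one workaround (a sketch using the same TEST_PADDING table) is to trim the trailing blanks before comparing; both rows should then be returned:

```sql
-- RTRIM removes the trailing blanks that CHAR added, so 'A' matches 'A'
SELECT * FROM test_padding
WHERE RTRIM(fixed_col) = var_col;
```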

Numeric Data
Numbers are stored by using the NUMBER datatype. By default, a number field can contain up to 38 digits of precision along with, optionally, a decimal point and a sign. Positive and negative numbers can have a magnitude from 1.0 x 10^-130 to 9.99 x 10^125. A number can also have a value of 0, of course.

To restrict the precision of a number (the total number of digits) and its scale (the number of digits to the right of the decimal point), enclose the required value(s) inside parentheses following the NUMBER keyword. If you include only the precision, you actually define an integer. Any numbers with decimal values are rounded to the nearest integer before being stored. For example, NUMBER(3) allows numbers in the range of -999 to +999, and an inserted value of 10.65 is stored as 11.

If you provide a precision and scale, you can store a number with as many digits as provided by the precision, but only precision minus scale digits before the decimal point. Table 7.2 shows some examples of numbers that you can and can't store in a column defined as NUMBER(5,2).

Using a negative scale value

If you use a negative value in the scale field of a number column's length definition, the numbers will be rounded to that power of 10 before being stored. For example, a column defined as NUMBER(10,-2) will take your input and round to the nearest 100 (10 to the power of 2), so a value of 123,456 would be stored as 123,500.

How Oracle stores numbers

Oracle stores all numbers, regardless of the definition, by using a mantissa and exponent component. The digits of the mantissa are compressed two digits per byte, so the actual space required to store a number depends on the number of significant digits provided, regardless of the column definition.

Table 7.2  Valid and invalid numbers for a column defined as NUMBER(5,2)

    Number                  Stored As
    0                       0
    1                       1
    12.3                    12.3
    -12                     -12
    -123.45                 -123.45
    123.456                 123.46 (rounded to 2 decimal digits)
    -12.345                 -12.35 (rounded to 2 decimal digits)
    123.4567890123456789    123.46 (rounded to 2 decimal digits)
    12345                   Invalid; exceeds precision (5-2 digits before decimal)
    -1234.1                 Invalid; exceeds precision (5-2 digits before decimal)
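The NUMBER(10,-2) rounding behavior described above can be verified with a quick test (the table name here is hypothetical):

```sql
CREATE TABLE scale_test (amount NUMBER(10,-2));
INSERT INTO scale_test VALUES (123456);

-- The stored value is rounded to the nearest 100: 123500
SELECT amount FROM scale_test;
```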

Handling the year 2000 problem

Oracle has always stored both the century and the year for any date value in the database. To help distinguish dates in the 20th and 21st centuries when you provide only the last two digits of the year to the TO_DATE function, Oracle provides the RR date format mask. In a statement that stores a date by using the function TO_DATE('12/12/03','DD/MM/RR'), the supplied year (03) is less than 50, so the stored date will have the current century if the current year's last two digits are less than 50, and will have the next century if the current year's last two digits are 50 or greater. Full details of the RR format are given in the Oracle8 Server SQL Reference manual.
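A quick way to see the RR mask in action is to format the resolved date back out with a four-digit year. This sketch assumes the statement runs while the current year's last two digits are 50 or greater (as they were in 1998):

```sql
-- With a current year of 1998 (98 >= 50) and a supplied year of 03 (< 50),
-- the RR mask resolves '03' to the next century: 2003
SELECT TO_CHAR(TO_DATE('12/12/03', 'DD/MM/RR'), 'DD-MON-YYYY') AS resolved_date
FROM dual;
```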

Date Data
Use the DATE datatype to store date or time information. Oracle has a single 7-byte internal format for all dates, with 1 byte for each of the century, year, month, day, hour, minute, and second components. Depending on the format your applications use to store dates, some fields may be left to default. For the time fields, the defaults result in a time of midnight. For the century, either the century of the current date (taken from the operating system date setting) is used, or a choice of 19 or 20, depending on the year value. The RR format mask causes the latter behavior, with a year in the range 50 to 99 resulting in a value of 19 for the century, and a year in the range 00 to 49 resulting in a value of 20 for the century.

Default date formats

Oracle uses a format mask when dealing with dates so that each of the seven components of the combined date/time fields can be uniquely identified. The database runs with a default date mask dependent on the settings of the initialization parameters NLS_TERRITORY and NLS_DATE_FORMAT. Generally, these hide the time component so that it defaults to midnight, unless the application or individual statement overrides the mask and provides its own values for one or more of the time fields. When you're using date arithmetic and date functions, the time component may not be obvious to you or your users and can cause apparent problems. Entering just the time component causes the date portion to be derived from the current operating system date, which uses the RR format mask process described above for the century, takes the current year and month, and defaults the day to the first day of the month.

Oracle can perform extensive date field operations, including date comparisons and date arithmetic. If you need to manipulate dates, check the Oracle8 Server SQL Reference manual for detailed descriptions of the available date operators and functions.

One common problem occurs when you use the SYSDATE function to supply the current date when inserting a new record into the database. This would seem to be straightforward, allowing a query such as the following to select all orders placed on October 10, 1997, assuming that the date is provided in the correct format:

    SELECT * FROM orders WHERE order_date = '10-OCT-97'

However, because the SYSDATE function by default always inserts the current time as well as the current date (whereas the query provides only the date, meaning that midnight on October 10 is being selected), there will be no matching records in the ORDERS table. The solution would be to store the data as if the time were midnight by applying the TRUNC function to SYSDATE on insert.
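Two ways to sketch the fix, using the ORDERS table from the example above (the ORDER_ID column is hypothetical): truncate the time at insert, or query with a date range that covers the whole day:

```sql
-- Option 1: store the date with the time set to midnight
INSERT INTO orders (order_id, order_date)
VALUES (1001, TRUNC(SYSDATE));

-- Option 2: leave the stored time intact and match the whole day instead
SELECT *
FROM orders
WHERE order_date >= TO_DATE('10-OCT-97', 'DD-MON-RR')
  AND order_date <  TO_DATE('10-OCT-97', 'DD-MON-RR') + 1;
```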

Binary Data
Binary data is stored without any interpretation of embedded characters. For compatibility with Oracle7, the RAW and LONG RAW datatypes are still usable. However, the LONG RAW datatype is being deprecated, meaning that it's gradually becoming unsupported. Oracle8 offers the BLOB and BFILE datatypes to store binary data; these can be used in place of the RAW and LONG RAW datatypes.

SEE ALSO: Details on large objects (LOBs) are covered elsewhere in this book.

The only internal difference between RAW and LONG RAW is the maximum number of bytes they can store. RAW has a maximum length of 2,000 bytes, and you must define the maximum length you need as part of the column definition, even if you need all 2,000 bytes. The LONG RAW datatype can hold a maximum of 2GB. You can't limit this size as you can with a RAW column but, as with variable-character fields, Oracle stores only the characters you supply in RAW and LONG RAW fields, regardless of the maximum possible length.

By using some of the datatypes discussed above, you could create a multicolumn table, SAMPLE1, as follows:

    CREATE TABLE sample1 (
      sample_id       NUMBER(10),
      sample_name     VARCHAR2(35),
      owner_id        NUMBER(4),
      collection_date DATE,
      donor_gender    CHAR(1),
      sample_image    BLOB);

The new syntax isn't very complicated

Compared with the CREATE TABLE command included at the beginning of this chapter, the only other new syntax introduced in the listing, in addition to the datatypes, is the comma that separates each column definition.

Tables defined with a variable-length datatype in one or more columns (that is, any datatype other than CHAR or DATE) may need some special consideration when they're created if these columns are likely to be updated during the lifetime of any given row. This is because Oracle packs the data into table blocks as tightly as possible. This tends to result in very little, if any, space being left on the block if a row grows in length due to an update that adds more bytes to an existing field.
Before creating a table in which you anticipate updates being made to variable-length columns, read the later section "Setting Space Utilization Parameters" to see how to avoid some of the problems this can cause.

Sizing and Locating Tables


Appendix A of the Oracle8 Server Administrator's Guide provides a detailed description of calculations you can use to compute how big a particular table will be, based on the column definitions, the average record size, and the expected number of rows. Before the most recent releases of Oracle, a table could contain only a limited number of extents, so it was important to know how big the table would be before creating it in order to provide sufficiently large extents. With Oracle8, you can have unlimited extents in a table. Consequently, the only reason to be concerned with table size is to determine whether you have sufficient disk space to store it.

Although I won't try to dissuade you from using Oracle's provided sizing calculations, I want to suggest an alternative approach to predicting table size. One key to either approach is to know the average row size. This generally requires that you find or generate some valid sample data. If you have such data, I recommend that you simply load it into your table, measure how much space is used, and then extrapolate the final size based on the ratio of the number of rows in your sample data to the total number of rows you expect the table to contain.

Use sample data to predict table size

1. Put your sample data into a flat file.
2. Create a SQL*Loader control file and run SQL*Loader to load the data. (See Chapter 25, "Using SQL*Loader and Export/Import," for details on SQL*Loader.)
3. Execute the following command to collect current storage information:

       ANALYZE TABLE table_name COMPUTE STATISTICS

4. Execute the following query to find the number of blocks now storing data:

       SELECT blocks FROM user_tables WHERE table_name = 'table_name'

5. Compute the total number of blocks required to store the full table by using this formula:

       blocks * (total number of rows) / (number of rows in sample)

You can use the following SQL*Plus script to perform these steps after you load your sample rows:

    SET VERIFY OFF
    ANALYZE TABLE &&table_name COMPUTE STATISTICS
    /
    SELECT blocks * &total_row_count / num_rows
           AS "Total blocks needed"
    FROM user_tables
    WHERE table_name = UPPER('&&table_name')
    /

After you determine your table's maximum size, you can identify an appropriate tablespace in which to store it. Your choice should be based on the factors discussed in Chapter 5, "Managing Your Database Space," concerning tablespace usage. These include a recommendation to use a limited number of different extent sizes, such as small, medium, large, and huge, for all objects in a given tablespace. You should be able to determine from its maximum size which category of extent sizes would best suit the table, assuming that you follow our recommendations. For a very large table, the largest extent size is usually preferable, although if the table is going to grow very slowly, you may want to use smaller extents so that you can conserve disk space in the interim. Other factors in deciding on a tablespace include the frequency of backup, the likelihood of dropping or truncating the table in the future, and which other segments exist in the candidate tablespaces.
The latter might influence your decision when you consider what else the application might need to have access to, besides the new table, if a data file in the tablespace should become unusable.

Permissions when creating a table

To create a table successfully, you must have the necessary privileges and permissions, including the CREATE TABLE privilege and the right to use space in the named tablespace. See Chapter 9, "Creating and Managing User Accounts," and Chapter 10 for more information on these topics.

When you've determined which tablespace to use, you should add the tablespace name to the CREATE TABLE statement. By using the SAMPLE1 table-creation script shown previously, let's put the table into the SAMPLE_DATA tablespace:

    CREATE TABLE sample1 (
      sample_id       NUMBER(10),
      sample_name     VARCHAR2(35),
      owner_id        NUMBER(4),
      collection_date DATE,
      donor_gender    CHAR(1),
      sample_image    BLOB)
    TABLESPACE sample_data
    /

The TABLESPACE clause creates the SAMPLE1 table in the SAMPLE_DATA tablespace.

Of course, if the person creating the table has the SAMPLE_DATA tablespace as his or her default tablespace, the TABLESPACE clause isn't needed. If you include it, you guarantee that the table will be created in the desired tablespace no matter who runs the script.

Setting Storage Parameters


You need to concern yourself with a table's storage parameters only if you haven't designed your database's tablespaces according to the guidelines in Chapter 5. These guidelines, along with the correct tablespace selection discussed in the previous section, should allow the table to use the default storage information defined for the tablespace. However, we'll look at each storage option in turn for those of you who may need to consider overriding your defaults. You include these options, with the required values, in a STORAGE clause as part of the CREATE TABLE or ALTER TABLE commands.

Overriding the defaults

If you have a database that's not designed as rigorously as discussed earlier, you may need to override one or all of the storage parameters. You can find the current default settings for the storage options in a tablespace by querying the DBA_TABLESPACES data dictionary view.
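For example, a query along these lines shows the defaults a table would inherit from its tablespace (the tablespace name is taken from the earlier examples):

```sql
-- Default storage settings that tables created in SAMPLE_DATA will inherit
SELECT tablespace_name, initial_extent, next_extent,
       pct_increase, min_extents, max_extents
FROM   dba_tablespaces
WHERE  tablespace_name = 'SAMPLE_DATA';
```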

INITIAL
This parameter sets the size, in bytes, of the first extent built for the table. Some possible criteria for choosing a size include the following:

- For fast table scans, the extent should hold the entire table.
- For fast parallel table scans, the extent should hold 1/xth of the table, and the rest of the table should be placed in other (x-1) equal-sized extents on different disks.
- To load the table using SQL*Loader in parallel, direct path, the extent should be as small as possible because it won't be used.
- Fit as much of the table as possible into the largest piece of free space available. This is particularly useful when the tablespace has lots of free space but only small amounts of it are contiguous.

NEXT
This parameter sets the size, in bytes, of the second extent. Some possible criteria for choosing a size include the following:

- For fast table scans, the extent should hold all the rows not stored in the first extent.
- For fast parallel table scans, the next extent should be the same size as the initial and all subsequent extents, and each extent should be stored on a different disk.
- To load the table with SQL*Loader in parallel, direct path, the extent should be large enough to hold all the rows from one parallel loader session.
- Fit as much of the table as possible that doesn't fit into the initial extent into the largest piece of remaining free space. This is particularly useful when the tablespace has lots of free space but only small amounts of it are contiguous.

PCTINCREASE
This parameter defines a multiplier used to compute the size of each new extent. It's applied to the third, and every subsequent, extent. If you set it to zero (0), each extent will be the same size as defined by NEXT; if you set it to 100, each subsequent extent will double in size. A value of zero is generally preferred. You may want to use a non-zero value if you don't know how much your table will grow, so that each extent will be larger than the previous one. This should eventually result in a sufficiently large extent to hold the remainder of the table.

PCTINCREASE options

Oracle allows large values of PCTINCREASE to reduce the number of additional extents that might be needed if a table's size was seriously underestimated when it was first created. In earlier releases, this feature was essential because the number of extents that could be added to an existing table was finite. With the UNLIMITED option now available, the only drawback to having many extents is the overhead associated with adding each new one.

In general, I recommend leaving the value at zero whenever the table must share a tablespace with at least one other segment, to preserve uniform extent sizes. In other cases, you should set it to a reasonable value so that it doesn't begin requiring extents significantly larger than the available disk space. The drawbacks of a non-zero setting include the following:

- An attempt to create an extent larger than can be held by any data file in the tablespace.
- Irregular extent sizes, leading to irregular free extents if the table is dropped or truncated.
- A final extent much larger than required for the number of rows it needs to hold.
- Less predictable space consumption, particularly if there are many tables so defined.

MINEXTENTS
This parameter sets the number of extents built by the CREATE TABLE command. The sizes of the extents are determined by the values for INITIAL, NEXT, and PCTINCREASE.

There are some possible reasons for creating only one extent initially:

- You expect the table to fit into a single extent.
- Additional extents will be built by SQL*Loader in parallel, direct mode.
- You'll add extents manually to fit them into differently sized free extents.
- You'll add extents manually to place them on different disks.

There are some possible reasons for creating multiple extents initially:

- You have lots of free extents with equal, or nearly equal, sizes, but none large enough to hold the whole table.
- Your tablespace is built with many data files, and you want Oracle to spread the extents evenly among them.
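When adding extents manually, as in the last two bullets, you can use ALTER TABLE with the ALLOCATE EXTENT clause; a sketch (the extent size and data file name here are hypothetical):

```sql
-- Manually add a 1MB extent and place it in a specific data file
ALTER TABLE sample1
  ALLOCATE EXTENT (SIZE 1M
                   DATAFILE '/u03/oradata/prod/sample_data02.dbf');
```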

MAXEXTENTS
This parameter sets the maximum number of extents the table will be allowed to use. You don't usually need to worry about this value initially because it can be changed later. Of course, you should be prepared to monitor the use of the space as the number of extents in a table approaches this value, no matter how you've set it. In some situations, however, you may need to set a specific value of MAXEXTENTS, including the following:

- When you have limited disk space.
- When you have fragmented free space in such a way that NEXT and PCTINCREASE can't be set to reasonable values until the next extent is needed.
- In a parallel server environment, when you're manually assigning extents to specific instances (see the Oracle8 Server Parallel Server Administration manual for more details on this topic).

Additional storage options that don't affect extent sizes

You can use other keywords in the STORAGE clause of the CREATE TABLE command: FREELISTS, FREELIST GROUPS, and BUFFER POOL. However, these values can't be set at the tablespace level and don't affect the allocation of table extents. The impact of these storage options is discussed in other sections of this book.

SEE ALSO: The BUFFER POOL keyword, free lists, and free list groups are covered in other chapters of this book.

We end this section by showing the additional lines added to the CREATE TABLE sample1 script to include a STORAGE clause; the parameter values shown here are examples only:

    CREATE TABLE sample1 (
      sample_id       NUMBER(10),
      sample_name     VARCHAR2(35),
      owner_id        NUMBER(4),
      collection_date DATE,
      donor_gender    CHAR(1),
      sample_image    BLOB)
    TABLESPACE sample_data
    STORAGE (INITIAL     1M
             NEXT        1M
             PCTINCREASE 0
             MINEXTENTS  1
             MAXEXTENTS  UNLIMITED)
    /

Setting Space Utilization Parameters


Each block in an Oracle table can hold as many rows as will fit into the block. When the block is full, Oracle removes it from the list of blocks into which new rows can be inserted. At some point in the future, if enough rows are deleted from the block, it may be added back onto the list of available blocks so that more new rows can be added to it, using the space freed up by the dropped rows. Some special parameters associated with the table's definition, known as space utilization parameters, influence these events. In particular, they control just how full a block becomes before it's moved off the list of available blocks (known as the free list), how much space must be made available before it's moved back onto the free list again, and how much space is reserved for multiple transactions to access the block concurrently. These settings affect how updates are managed, how much space might be going to waste in a table, and how much transaction concurrency can occur on a block.
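The block-level behavior described above is controlled by parameters such as PCTFREE, PCTUSED, and INITRANS, which can be set when the table is created; a sketch with example values (the table is hypothetical):

```sql
CREATE TABLE space_demo (
  demo_id   NUMBER(10),
  demo_text VARCHAR2(100))
PCTFREE  10   -- keep 10% of each block free for rows that grow
PCTUSED  40   -- put the block back on the free list when usage drops below 40%
INITRANS 2;   -- reserve space for two concurrent transactions per block
```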

Creating Tables for Updates


Unless your table contains all DATE and CHAR columns, any update to a record can cause that record to grow or shrink in overall length. This is because Oracle stores only the bytes that contain information in its variable-length fields. A new value, if it has more or less data, changes the amount of storage required.

Updating a record with smaller field values doesn't cause any problems, unless this happens repeatedly and the block becomes almost empty, thus wasting space. However, if you add to a field's length, you may run out of space on the block. Even if this doesn't happen the first time you perform such an update, it may occur if you continue to increase the lengths of different columns or rows.

If you run out of space, Oracle will move the row to another block in the table through a process known as migration. Although this may sound like a benign solution, it does have performance ramifications. A migrated row leaves behind a forwarding address (pointer) so that it can still be found by an index lookup or by an ongoing query. Subsequent access to that row results in a probe of the row's original home block, which finds the forwarding address, and then a probe of the block where it now resides, known as an "overflow block." Rarely will the original block and overflow block be contiguous blocks in the table, so the disk retrieval for such a row will be slow. Migrated rows, particularly if they are numerous, can affect the overall database performance.

To help you avoid massive row migration, Oracle lets you reserve space on a block into which the data can expand. By default, this space is 10 percent of the data area of the block. When a change to the block leaves less free space than is reserved for row expansion, the block is taken off the free list and no further inserts will occur. For some tables, the 10 percent default may be perfectly adequate. For other tables, 10 percent may be a completely inadequate amount of free space or far more than is needed.
The space is reserved with the PCTFREE parameter, and you should determine a good value for it as best you can before you build any production table. As soon as a table is built you can change this value, but only blocks that aren't being used will adopt the new value and reserve the desired amount of free space. If you know two pieces of information about the data being stored in the table, you can use the following formula to compute a good value for PCTFREE. The information you need is the average length of the rows when they're first inserted and the average length of the rows when they're at their maximum length. In the following formula, the terms avg_insert_length and max_length refer to these values, respectively:



    PCTFREE = 100 * (max_length - avg_insert_length) / max_length

In determining the average row lengths, you need to consider only the number of bytes of data per row, not the internal overhead associated with stored rows and fields. If you use just data lengths, the result will be slightly higher and have a built-in margin of error. The size of this error varies depending on the block size, the number of rows in the block, and the number of columns per row. For a table with 10 columns and a database block size of 4KB, if 10 rows fit into the block, this margin of error will be just over 3 percent.

Example of computing the PCTFREE value

If a table has a row with an average length of 153 bytes when it's initially inserted, and it grows by an average of 27 bytes over the course of its time in the table, the average maximum length of a row is 180 bytes. By using these two values in the formula, we find that this table should be created with PCTFREE = 100 * (180 - 153) / 180 = 100 * 27 / 180 = 15.

For rows that don't change over time, or change only fixed-length fields, the expression (max_length - avg_insert_length) reduces to zero, which in turn causes the entire formula to result in zero. If you're really certain that there will be no updates, or only updates that don't change record lengths, you can set PCTFREE equal to zero without concern for row migration problems. If you have a table in which the value of (max_length - avg_insert_length) is negative, you also shouldn't have to worry about migration if you set PCTFREE to zero. However, in such a table, there will be a tendency for the amount of data on each block to become less than is optimal. This will occur when a block gains sufficient empty space, due to record shrinkage, to hold a whole new row.
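For readers who like to double-check the arithmetic, the example can be reproduced with a quick SQL*Plus query against DUAL (the literal values come from the example in the text, not from any real table):

```sql
REM Verify the PCTFREE example: rows insert at 153 bytes
REM and grow to an average maximum of 180 bytes.
SELECT ROUND(100 * (180 - 153) / 180) AS pctfree_value
  FROM dual
/
REM Returns 15, the value derived in the text.
```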
With many blocks in this state, you'll suffer some inefficiency because of this wasted space; more blocks are being taken to store rows than are really needed. To overcome this, you should consider the table in the same category as tables that undergo record deletions over time, and follow the approach to deal with these in the next section.

Creating Tables with High Delete Activity


Over time, tables that have rows deleted or rows that shrink in size can become inefficient. The empty space on the blocks represents additional disk reading and writing that must be done because empty space, rather than data, is being transferred between disk and memory. Oracle provides the space utilization parameter PCTUSED to help you control the amount of empty space allowed to remain on a block.

As mentioned earlier, a block is taken off the free list when it's full, so that no further attempts are made to insert more rows into it; that is, when it has less empty space than the PCTFREE portion of the block. PCTUSED sets a threshold value at which the amount of free space becomes sufficient for the block to hold one or more new rows, so it can be placed back on the free list. By default, Oracle sets PCTUSED at 40 percent. In other words, a block that has less than 40 percent of its data area occupied by rows will be put back on the free list.

You can change the value of PCTUSED at table creation time or anytime thereafter. As with PCTFREE, the impact of a change to PCTUSED may be delayed. A block that already contains less than the new PCTUSED amount of data, unless it's already on the free list, won't be placed there until another change is made to it.

What makes a good value for PCTUSED? The first criterion is to set it so that a block goes back on the free list only when there's room to store at least one more new row. In a very volatile table, where rows are frequently added and dropped, it may be worth wasting space on a block until there is room to fit three or four rows. Moving blocks on and off the free list requires some overhead that may not be worth incurring unless more than one row is affected by the change.
After you decide how many rows to leave room for before placing a block back on a free list, you can use the following formula to compute a good value for PCTUSED:

    PCTUSED = 100 - PCTFREE - 100 * row_space / block_space

where

    row_space   = avg_insert_length * rows_needed
    block_space = DB_BLOCK_SIZE - 90 - INITRANS * 24

- PCTFREE is the space-utilization parameter value discussed in the previous section.
- avg_insert_length is the average number of bytes in a row when it's first inserted.
- rows_needed is the number of rows you want to be able to fit into the block before returning it to the free list.
- DB_BLOCK_SIZE is the database block size, set in the parameter file and found by querying the V$PARAMETER view.
- INITRANS is a space-utilization parameter value discussed in the next section.

Constants used in computing the PCTUSED value

The constant 90 is an imprecise measure of the space used by Oracle's header information in a table block, but it has proven to be sufficiently accurate for this calculation. The constant 24 is the number of bytes used to store a transaction entry on a typical hardware platform, and should be adequate for this calculation.

Following from the example we used to demonstrate the computation for PCTFREE, let's see how this formula would work if we wanted to insert new rows whenever a block had room for three new rows. In the earlier example, the average length of a row when initially inserted was 153 bytes, and the value for PCTFREE was calculated at 15. Let's use a block size of 4KB and an INITRANS value of 4 to complete the PCTUSED calculation. So we need to compute

    PCTUSED = 100 - 15 - 100 * (153 * 3) / (4096 - 90 - 4 * 24)

- 100 is a constant for computing the percentage.
- 15 is the value for PCTFREE.
- 153 is the number of bytes required to store an average row when inserted.
- 3 is the number of rows we need room to store before returning the block to the free list.
- 4096 is the number of bytes in a 4KB block.
- 90 is a constant representing the number of bytes used for block overhead.
- 4 is the value of INITRANS.
- 24 is a constant representing the number of bytes taken by a transaction entry.

This simplifies to the following:

    PCTUSED = 85 - 100 * 459 / (4006 - 96) = 85 - 100 * 459 / 3910

If we round the quotient 459/3910 (= 0.1173913) up to 0.12, the result becomes the following:

    PCTUSED = 85 - 100 * 0.12 = 85 - 12 = 73

The second consideration is how much space you can afford to spare. The lower the PCTUSED value you use, the more empty space will accumulate on a block before it's recycled onto a free list for more data to be added. In very large tables, you may not be able to afford to store blocks with more than a minimal amount of free space. In such cases, even though you may cause additional overhead by moving blocks back onto the free list more often than the preceding formula suggests you need, you may gain some benefits. Not only will you save disk space, but if the table is queried extensively, particularly with full table scans, you'll need to read fewer blocks into memory to retrieve the same number of rows.
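The PCTUSED arithmetic can be checked the same way; CEIL is used here to round the quotient up, mirroring the rounding step in the text (the literal values are the example's, not measured figures):

```sql
REM PCTFREE 15, 153-byte rows, room for 3 rows,
REM 4KB blocks, INITRANS 4.
SELECT 100 - 15 - CEIL(100 * (153 * 3) / (4096 - 90 - 4 * 24))
       AS pctused_value
  FROM dual
/
REM Returns 73, matching the worked example.
```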

Creating Tables for Multiple Concurrent Transactions


For a transaction to add, alter, or drop a row from an Oracle table, it must first obtain a lock on that row. It does this by first registering itself on the block where the row will reside or now resides. The registration is made by updating a special area of the block called the transaction entry slot, also known as an intent to lock (itl) slot.

If a block contains a lot of rows, it's conceivable that more than one transaction will want to work on the same block at the same time. To do this, each must obtain a transaction slot for its own use. Oracle allows up to 255 transaction slots to be created on a single block, but by default it builds only one when a block is added to a table. When additional slots are needed, the Oracle server process needing the slot has to rearrange the contents of the block to make room for the new slot. If you want to avoid this behavior, you can create a table that will contain more transaction slots on each block as the blocks are added. You can also limit the upper number of slots that can be created, preserving the space for additional row data, albeit at the cost of possibly making users wait for a transaction slot on a very busy block.

You control the allocation of transaction slots with the INITRANS and MAXTRANS space utilization parameters. With INITRANS, you set the number of transaction slots that each block acquires by default. With MAXTRANS, you set an upper limit on the total number of such slots that can be assigned to the block. The difference, MAXTRANS minus INITRANS, is the number of slots that can be added dynamically if needed.

Usually, there's no real need to change the default value for MAXTRANS. Even if you have hundreds of concurrent transactions working against the same table, they're likely to be working on different blocks simply because most blocks don't have room for that many rows. In the rare situation where tens of concurrent transactions all need the same block, they'll probably have to wait for one of the other transactions to release the row-level lock before they can do any work. It's in this case that you might want to set MAXTRANS. Otherwise, each transaction will build itself a transaction slot that it will then occupy idly until it can get to the row it needs. These slots represent wasted space on the block.

You might want to change INITRANS, however, if you can predict that more than one transaction will likely need the same block at the same time. By preallocating the necessary number of transaction slots on each block, you'll help the second and subsequent users get to their resources sooner. Each slot requires about 24 bytes, so don't set the value of INITRANS too high; otherwise, you'll be taking space that could be occupied by row data.

Adding space utilization parameters to the example SAMPLE1 table requires further modifications to our table-creation command, here using the PCTFREE, PCTUSED, and INITRANS values computed in the earlier examples:

    CREATE TABLE sample1 (
      sample_id        NUMBER(10),
      sample_name      VARCHAR2(35),
      owner_id         NUMBER(4),
      collection_date  DATE,
      donor_gender     CHAR(1),
      sample_image     BLOB)
    TABLESPACE sample_data
    STORAGE (
      INITIAL     5M
      NEXT        5M
      PCTINCREASE 0
      MAXEXTENTS  50)
    PCTFREE  15
    PCTUSED  73
    INITRANS 4
    /

Space utilization parameters

Building Tables from Existing Tables


One option you may want to exercise is to build a table from the definition of, or the full or partial contents of, another table. Oracle allows you to do this through an AS SELECT clause to the CREATE TABLE command. The table definition can look just the same as the examples you've seen to this point, except that the datatype isn't included in the column list because the type is inherited from the original table. In fact, if you also want to keep the column names in the new table the same as they are in the original table, you can omit the column list completely. As with the versions of the CREATE TABLE command you've already seen, you can also include or omit the entire STORAGE clause, or just include it with the required parameters; you can include or exclude any or all space utilization parameters; and you need to include the TABLESPACE clause only if you want the new table to be created somewhere other than your default tablespace.

The AS SELECT clause can include any valid query that will retrieve columns and rows to match the new table's definition. The columns named in the SELECT clause must match the column list, if any, for the new table. If the new table doesn't have a column list, all the columns from the original table are used in the new table. The SELECT clause can optionally include a WHERE clause to identify which rows to store in the new table. If you don't want any rows stored, include a WHERE clause that never returns a valid condition, such as WHERE 1 = 2.

The following shows three different variations of the CREATE TABLE...AS SELECT statement, each one producing a different table definition from the same base table we have been using, SAMPLE1. (The statement bodies are reconstructed here from the descriptions in the comments.)

    REM Create SAMPLE2, an exact copy of SAMPLE1, in tablespace
    REM SPARE, using default storage and no free space in the
    REM blocks.
    CREATE TABLE sample2
      TABLESPACE spare
      PCTFREE 0
      AS SELECT * FROM sample1
    /

SAMPLE2 is based on the entire SAMPLE1 table.

    REM Create SAMPLE3, containing just the ID and IMAGE columns,
    REM renamed, from SAMPLE1, placing it in the IMAGE tablespace
    REM with unlimited 100MB extents and default space utilization
    REM parameters.
    CREATE TABLE sample3 (id, image)
      TABLESPACE image
      STORAGE (
        INITIAL     100M
        NEXT        100M
        PCTINCREASE 0
        MAXEXTENTS  UNLIMITED)
      AS SELECT sample_id, sample_image FROM sample1
    /

SAMPLE3 is based on two renamed columns from the SAMPLE1 table.

    REM Create SAMPLE4 containing all but the IMAGE column from
    REM SAMPLE1, and only selecting records from the past year.
    REM Use the DEMOGRAPHIC tablespace with default storage, zero
    REM free space, a block reuse threshold of 60 percent, and
    REM exactly 5 transaction slots per block.
    CREATE TABLE sample4
      TABLESPACE demographic
      PCTFREE 0
      PCTUSED 60
      INITRANS 5
      MAXTRANS 5
      AS SELECT sample_id, sample_name, owner_id,
                collection_date, donor_gender
           FROM sample1
          WHERE collection_date > ADD_MONTHS(SYSDATE, -12)
    /

SAMPLE4 is based on all but one column from the SAMPLE1 table and includes only a subset of rows.

Monitoring Table Growth


Although you try to determine a table's overall size when it's created to evaluate disk requirements, the actual size may well vary from the predicted size. You should monitor the database tables to ensure that they won't run out of space in the course of normal business. Similarly, you may want to check that the space taken by the table's extents is being used efficiently and that there aren't a lot of empty or near-empty blocks. You may also want to confirm that the PCTFREE value is set appropriately by looking for migrated rows.

The ANALYZE command collects statistics and stores them in the data dictionary for you. These statistics include the number of blocks used, the amount of unused space per block, the number of empty blocks, and the number of migrated or chained rows. A chained row is one that's simply too large to fit into a single block, and thus will always be spread across multiple blocks. The data dictionary, unfortunately, doesn't distinguish between chained rows and migrated rows, the latter being rows that get longer through the use of UPDATE commands and don't have sufficient space on their block for the growth. If the average row length, also shown in the dictionary, is less than the space available on a block (the block size minus the header and transaction slot space), the rows are most likely migrated, not chained.

Row chaining versus migration

If a row is too big to fit into a single block, the first part of the row is stored in one block and the rest of the row is stored in one or more overflow blocks. Each part of the row is known as a row piece, and the first row piece is counted as the row's location for any index entry or when it's examined by the ANALYZE command. When a row migrates because it no longer fits into its original block, a pointer is left behind in the original block to identify the row's new location. This pointer is treated as the row's location so that any index entries pointing to the row don't have to be updated. The ANALYZE command treats this pointer as an initial row piece and doesn't distinguish it from a row piece belonging to a chained row. This is why the results of the ANALYZE command don't distinguish between chained and migrated rows.

You can use ANALYZE TABLE table_name COMPUTE STATISTICS or ANALYZE TABLE table_name ESTIMATE STATISTICS to collect the statistics stored in the DBA_TABLES (and USER_TABLES) data dictionary views. The former command will always give accurate results; the latter will be a good, but not precise, estimate. The COMPUTE option takes longer as the table grows in size, so you may prefer to estimate statistics for your large tables. You can select the percentage of the table or the number of rows you want to include in the estimate with the SAMPLE clause, using SAMPLE x PERCENT or SAMPLE x ROWS (where x is the percentage or the row count, respectively).

When you collect statistics, you may affect how the database performs optimization to determine statement execution plans. If you want to ensure that rule-based optimization is used by default, you should execute the ANALYZE TABLE...DELETE STATISTICS command after you examine the statistics. If you want to spend more time reviewing the statistics, you can save the results by executing a CREATE TABLE...AS SELECT command against the data dictionary table. In fact, if you do this after each run of the ANALYZE command, using a different table to store the results each time, you will build a history of the table's growth and data distribution. Once you have saved the statistics into a table, you can go ahead and execute the DELETE STATISTICS option to remove them from the base table definition.
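Putting these steps together, a periodic monitoring session for a hypothetical SAMPLE10 table might look like the following sketch; the statistics columns shown come from the USER_TABLES view:

```sql
REM Gather estimated statistics from a 10 percent sample.
ANALYZE TABLE sample10 ESTIMATE STATISTICS SAMPLE 10 PERCENT
/
REM Examine block usage and chained/migrated row counts.
SELECT num_rows, blocks, empty_blocks, avg_space,
       chain_cnt, avg_row_len
  FROM user_tables
 WHERE table_name = 'SAMPLE10'
/
REM Save a dated copy of the statistics to build a history
REM (the table name suffix is just a naming convention).
CREATE TABLE sample10_stats_9905 AS
  SELECT * FROM user_tables
   WHERE table_name = 'SAMPLE10'
/
REM Remove the statistics so rule-based optimization
REM remains the default.
ANALYZE TABLE sample10 DELETE STATISTICS
/
```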

Managing Extent Allocation


There are two reasons to monitor how many extents exist in a table:

- You don't want the table to reach its MAXEXTENTS number of extents and thus fail during the execution of a command.
- This may be a table where, for whatever reason, you want to add extents manually, and so you maintain MAXEXTENTS at the current number of extents to avoid dynamic allocation.

Reasons to add table extents manually

There are a number of reasons for manually adding extents to a table, although relying on automatic allocation generally makes your work a lot easier. Some of the more common reasons for manual allocation include overcoming a shortage of space in the tablespace that doesn't allow you to choose a good value for the NEXT extent to be allocated automatically; placing the extent in a data file of your choice, which allows you to spread the storage around different data files and, presumably, different disk drives; allocating the extent to a specific instance's free list groups if you're using the parallel server option; and ensuring that you fit the largest extent possible into the given space.

To see the number of extents in a table, you can query DBA_EXTENTS for the given table (segment) name. If you use the COUNT(*) value in the SELECT clause, the result will show the exact number of extents owned by the table. If you want to see other information, such as the data files in which the extents are stored, you can query other columns in this view.

To allocate an additional extent to a table for which you're using manual allocation, issue an ALTER TABLE...STORAGE (MAXEXTENTS x) command, where x is one more than the current number of extents. You can then add an extent with the ALTER TABLE...ALLOCATE EXTENT command.
The following script manually adds the 12th extent to the SAMPLE10 table, using the third data file in the USR_DATA tablespace:

    ALTER TABLE sample10
      STORAGE (MAXEXTENTS 12)
    /
    ALTER TABLE sample10
      ALLOCATE EXTENT
        (DATAFILE 'c:\orant\database\samples\usr_data3.ora')
    /

For tables to which extents are being added automatically, you simply need to ensure that MAXEXTENTS stays larger than the current number of extents. By monitoring the table over time, you should be able to predict how fast extents are being added and increase the MAXEXTENTS value before the current limit is reached. You use the same command that appears at the start of the preceding script to change the extent limit.
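To watch a table's extent count and placement, queries along the following lines can be used; the owner and segment names are hypothetical:

```sql
REM How many extents does the table currently own?
SELECT COUNT(*) AS extent_count
  FROM dba_extents
 WHERE owner = 'SCOTT'
   AND segment_name = 'SAMPLE10'
/
REM Where is each extent stored?
SELECT e.extent_id, f.file_name, e.bytes
  FROM dba_extents e, dba_data_files f
 WHERE e.file_id = f.file_id
   AND e.owner = 'SCOTT'
   AND e.segment_name = 'SAMPLE10'
 ORDER BY e.extent_id
/
```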

Removing Unused Space


There are two types of unused space in a table:

- Blocks that have been allocated but never used for data.
- Blocks that have become empty or partially empty over time due to row deletions or smaller values updated in variable-length fields.

If you have blocks of the first type (blocks that have never been used) and don't expect this space to be needed, you can remove it with the DEALLOCATE option of the ALTER TABLE command:

    ALTER TABLE table_name DEALLOCATE UNUSED

If you expect some but not all allocated space to be needed, you can drop a portion of the extra space by using a further option:



    ALTER TABLE table_name DEALLOCATE UNUSED KEEP integer [K|M]

This removes all but integer bytes (or kilobytes or megabytes, if the K or M suffix is used).

To reclaim the other type of free space, space that has been released by DML activity, you can try increasing the PCTUSED value for the table, as discussed earlier. This will allow blocks to be returned to the free list and used for future new rows sooner than they have been. However, if the table is fairly static and not many more changes will be made, the blocks that are already partially empty can't be touched again and won't be returned to the free list. Even if they were, there might not be enough new rows added to fill all the reusable space. In this case, you may have to rebuild the table. You can rebuild a table in a number of ways:

- Use the Export/Import utilities explained in Chapter 25.
- Dump the records into an external file and use SQL*Loader (also explained in Chapter 25) to reload them.
- If you have room, move the records to a temporary table, truncate the original table, and move the records back again, as shown in the following for the SAMPLE10 table:

    CREATE TABLE temp AS SELECT * FROM sample10
    /
    TRUNCATE TABLE sample10
    /
    INSERT INTO sample10 SELECT * FROM temp
    /
    DROP TABLE temp
    /
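Returning to the DEALLOCATE option described at the start of this section, a concrete form of the command, keeping 1MB of the never-used space for a hypothetical SAMPLE10 table, might be:

```sql
ALTER TABLE sample10 DEALLOCATE UNUSED KEEP 1M
/
```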

Using Views to Prebuild Queries


Although you can create tables from the contents of other tables, as discussed in the previous section, in many cases there's no need to consume the space required by the copy just because you need to see a variation of the original table. A view is built in the same way that the AS SELECT clause is used to build a table from the contents of another table. With a view, rather than the data being copied to a new location, only the definition of the required data is stored in the data dictionary. This amounts to storing the SELECT statement or, to put it another way, storing a query definition.

Views have many uses. The following sections show you how to build a view to meet a specific need you may have. We use a simple EMPLOYEE table as the basis for the examples in these sections. The following shows the CREATE statement for this table:

You can use views without degrading performance

Consider using a view whenever you think it could be useful, for any purpose whatsoever. There is very little overhead involved in storing the definition or in executing a statement against the view.

    CREATE TABLE employee (
      id             NUMBER(8)
                     CONSTRAINT employee_id_pk PRIMARY KEY,
      last_name      VARCHAR2(35),
      first_name     VARCHAR2(30),
      middle_initial CHAR(1),
      department     NUMBER(5)
                     CONSTRAINT employee_department_fk
                     REFERENCES department_table,
      salary         NUMBER(10,2),
      title          VARCHAR2(20),
      phone          NUMBER(5)
                     CONSTRAINT employee_phone_fk
                     REFERENCES phone_table,
      hire_date      DATE)
    /

For information on the constraints contained in this table definition, see Chapter 17, "Using Constraints to Improve Your Application Performance."

Changing Column Names with Views


If your table has column names that follow a naming convention, such as a corporate standard or one based on the vocabulary of the primary users, the names may not be meaningful to other users of the table. For example, the Human Resource Department may talk about an employee ID number, the Payroll Department may refer to the same number as a "payroll number," and the Project Scheduling System may use the term "task assignee." By using views, the same employee table can be used to provide this number with the preferred name for each group.

The following script, based on the EMPLOYEE table created in the preceding script, shows a view being created for the Payroll Department to give the ID column the name PAYROLL_NUMBER, leaving all other columns with their original names:

    CREATE VIEW pay_employee (
      payroll_number,
      last_name,
      first_name,
      middle_initial,
      department,
      salary,
      title,
      phone,
      hire_date)
    AS SELECT * FROM employee
    /
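As an aside, the same renaming can also be written with a column alias in the SELECT list instead of a view column list; this sketch assumes the EMPLOYEE table defined earlier:

```sql
CREATE VIEW pay_employee AS
  SELECT id AS payroll_number,
         last_name, first_name, middle_initial,
         department, salary, title, phone, hire_date
    FROM employee
/
```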

Dropping Columns with Views


Once in a while, you may find that a column originally defined in a table is no longer needed. The current version of Oracle8 doesn't allow you to drop such a column. Instead, I recommend that you set the column value to NULL in the entire table, which will free up the storage consumed by the column. This doesn't help users who may not know about the column's existence when they try to use the table. An INSERT statement, for instance, would fail if they didn't include a value for the dropped column. If you create a view that excludes the missing column, the view can be used in place of the table and the dropped column will no longer be a problem.

Updating views may not always be possible

If you have views that participate in table joins, your users may not be able to update them or perform other DML commands on them. For detailed information on the rules governing updatable views, see the later section "Updating Data Through Views."

To make the column's disappearance even more transparent to the users, you can first rename the table and then use the original table name to name the view that excludes the unwanted column:

1. Issue the following command to prevent table access under the old name:

       RENAME table_name TO new_table_name;

2. Build the required view with this command:

       CREATE VIEW table_name (...) AS SELECT ... FROM new_table_name;

   Make sure that you name all but the unwanted column in the list of column names (shown as ... in the preceding statement) and use the new name for the table in the FROM clause.

3. Grant the same permissions on the view that existed on the table.

The following CREATE VIEW statement will result in a view that apparently removes the PHONE column from the EMPLOYEE table:

    CREATE VIEW emp AS
      SELECT id, last_name, first_name, middle_initial,
             department, salary, title, hire_date
        FROM employee
    /
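Applied to the EMPLOYEE table, the three renaming steps might look like the following sketch (the new table name and the grantee are hypothetical):

```sql
RENAME employee TO employee_base
/
CREATE VIEW employee AS
  SELECT id, last_name, first_name, middle_initial,
         department, salary, title, hire_date
    FROM employee_base
/
GRANT SELECT, INSERT, UPDATE, DELETE ON employee TO hr_clerk
/
```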

Hiding Data with Views


Some tables may contain sensitive data that shouldn't be seen by all users, or rows that aren't useful for some parts of the user community. So, although you may need certain users to see all the contents of a table, others should see only certain columns or rows, or even a subset of rows and columns. You can accomplish this by creating a view that contains only the elements that should be seen by the selected users and then granting them access to the view rather than to the table.

For a view that contains a subset of the columns, you can use the same approach as you would to create a view to hide a dropped column (the example in the preceding section shows the creation of such a view). A view that shows users only a subset of the rows is built by using an appropriate WHERE clause. You can restrict both column and row access by building a view with a SELECT clause to identify just the required columns and a WHERE clause to choose the desired rows.

The following command shows the creation of such a view. It's based on the EMPLOYEE table from the earlier section "Using Views to Prebuild Queries," but includes only employees in Department 103. Therefore, it doesn't show the department column, nor does it include salary information:

    CREATE VIEW dept_103 AS
      SELECT id, last_name, first_name, middle_initial,
             title, phone, hire_date
        FROM employee
       WHERE department = 103
    /
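To complete the restriction, the selected users are granted access to the view only, never to the base table; the grantee name here is hypothetical:

```sql
GRANT SELECT ON dept_103 TO dept103_user
/
```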

Hiding Complicated Queries


Your users and applications may need to execute fairly complicated queries that contain multiple table joins, or subqueries, and combinations of these. If such a query is needed on a regular basis, you might consider creating a view that embodies the query. The user or application can then simply query the view without being concerned with the complexity of the underlying query. This will reduce the possibility of error as well as save time.

The following code builds a view that could be used for an online phone directory service based on the EMPLOYEE table (for the name and phone number) and the DEPARTMENT table, which it references (for the department name). Although it's not very complicated in terms of the number of columns and tables involved, it does provide a standard format for the output, using SQL functions and operators:

    CREATE VIEW phone_list (name, department, phone) AS
      SELECT UPPER(last_name) || ', ' ||
             INITCAP(first_name) || ' ' ||
             UPPER(middle_initial) || '.',
             department_name,
             e.phone
        FROM employee e, department d
       WHERE e.department = d.id
    /

Although the phone listing might be usefully ordered by the NAME column, a view can't contain an ORDER BY clause. A query against the view PHONE_LIST (created in the previous command) to show an alphabetical listing of all employees' phone information would have to include its own ORDER BY clause. The command would be

    SELECT * FROM phone_list ORDER BY name;

The ordering column is the NAME column from the view definition, not a column in the base tables on which the view is based.

Accessing Remote Databases Transparently with Views


To access a table on a remote database, a statement needs to identify the table name plus a database link name for Oracle to find the correct remote database and table. (For information on database links, see the Oracle8 Server Distributed Systems manual.) The link name is concatenated to the table name with a commercial "at" (@) symbol. To hide this structure from users and applications, you can create a view that embodies the table and link name.

If you needed to reach the EMPLOYEE table in San Francisco from a different database on the network, you could create a database link named SF to point to the San Francisco database and then build a view to hide this link's use. The following shows one version of the command to build the link and then the command to build the view:

    CREATE DATABASE LINK sf
      CONNECT TO emp_schema IDENTIFIED BY emp_password
      USING 'sfdb'
    /
    CREATE VIEW employee AS
      SELECT * FROM employee@sf
    /

Obviously, you can create a view to meet any one of a number of requirements. In some cases, you may need a view to help with a number of issues. There's no reason that the view to access the remote EMPLOYEE table, created in the preceding code, couldn't also restrict access to the SALARY column while renaming the ID column TASK_ASSIGNEE.
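Following that suggestion, a single view could hide the link, drop the SALARY column, and rename ID at the same time; this sketch assumes the SF link and the EMPLOYEE column list used earlier:

```sql
CREATE VIEW employee (task_assignee, last_name, first_name,
                      middle_initial, department, title,
                      phone, hire_date) AS
  SELECT id, last_name, first_name, middle_initial,
         department, title, phone, hire_date
    FROM employee@sf
/
```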

Creating and Handling Invalid Views


In some circumstances, you may need to create a view but find that you don't have the permissions on the base table, or that the table you know should exist isn't available. In such cases, you can create an invalid view that will remain unusable until the underlying table(s) is accessible by you. To do this, use the keyword FORCE following the CREATE keyword in your command to define the view. A view can also become invalid at a later time due to changes in the underlying table. Whenever a view is invalid, Oracle will return an error message if you try to execute a statement that refers to the view. After the underlying problem is fixed, the view should work normally again.
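For example, a sketch of a forced view; PAYROLL here is a hypothetical table that doesn't yet exist or isn't yet accessible to you:

```sql
-- FORCE lets the CREATE succeed even though PAYROLL can't be
-- resolved; the view stays invalid until the table becomes
-- available, after which it can be revalidated and used.
CREATE FORCE VIEW payroll_summary AS
SELECT department, SUM(salary) AS total_salary
  FROM payroll
 GROUP BY department
/
```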

Dropping and Modifying Views


Dropping a view simply requires issuing the DROP VIEW command to remove the view definition from the data dictionary. No space will be recovered because a view doesn't own any data.

Once in a while, you may need to change the definition of a view. Although this can be done by dropping the view and then re-creating it with the new definition, this might not be a desirable approach. If the view has been granted to users, dropping the view will lose the privileges. Dropping a view that's referenced in triggers or stored procedures causes these objects to be invalidated, requiring a recompile attempt the next time they're used, even if you add the new view definition immediately.

To preserve the integrity of the users and objects dependent on a view, you can modify the view's definition without having to drop it first. This is done with the OR REPLACE option of the CREATE command. By issuing the CREATE OR REPLACE VIEW... command, you can use a different SELECT clause from that in the existing view without the view disappearing from the data dictionary at any time. You can even modify a view such that a valid view becomes invalid or an invalid view becomes valid, or modify an invalid view to become a different invalid view. If the resulting view would be invalid, you must include the FORCE keyword after the CREATE OR REPLACE clause. Otherwise, the view won't be changed from its previous definition.

Don't expect users to be able to use invalid views
Just as when you create a view with the FORCE option, any modification that requires the FORCE keyword or that otherwise makes a view invalid renders the view unusable. Nobody can use the view name in a SQL statement successfully until the view is revalidated.
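A minimal sketch of the technique, reusing the DEPT_103 view from earlier in the chapter; the revised column list is illustrative:

```sql
-- Redefine the view in place: grants on DEPT_103 and the objects
-- that reference it survive, unlike with DROP VIEW + CREATE VIEW.
CREATE OR REPLACE VIEW dept_103 AS
SELECT id, last_name, first_name, title, phone
  FROM employee
 WHERE department = 103
/
```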

Updating Data Through Views


Although views are used primarily for queries, they can be used for other types of SQL statements. Rows can be inserted and deleted via a view on a single table when there are no set or DISTINCT operators in the view, nor any GROUP BY, CONNECT BY, or START WITH clauses. For row inserts, no columns excluded from the view can be defined as NOT NULL or be part of the PRIMARY KEY constraint (which implies NOT NULL). Updates can also be performed via a view on a single table and, in some cases, through a view on joined tables.
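Putting those rules into practice, a brief sketch; EMP_V is a hypothetical single-table view over EMPLOYEE with no DISTINCT, GROUP BY, or set operators, and it exposes all the NOT NULL columns:

```sql
-- DML through a simple single-table view passes straight to the
-- base table, subject to the restrictions described above.
INSERT INTO emp_v (id, last_name, first_name, department)
VALUES (1042, 'Smith', 'Pat', 103);

UPDATE emp_v
   SET title = 'Senior Analyst'
 WHERE id = 1042;

DELETE FROM emp_v
 WHERE id = 1042;
```

Each statement succeeds only because the view meets those restrictions; a view containing GROUP BY, for example, would reject all three.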

http://www.informit.com/content/0789716534/element_007.shtml (20 of 21) [26.05.2000 16:47:44]

informit.com -- Your Brain is Hungry. InformIT - Adding Segments for Tables From: Using Oracle8

For a single table, updates are limited by the same restrictions as inserts and deletes. In the case of a view across a join, only one table can be updated in a single statement. Furthermore, there must be a unique index on at least one column in the joined view, and the columns from the table being updated must all be updatable. To see whether a column in a view is updatable, you can query the data dictionary view DBA_UPDATABLE_COLUMNS (or USER_UPDATABLE_COLUMNS).

Understanding Oracle terminology for updatable join views
Oracle uses the term key-preserved tables when discussing the update options on views involving table joins. A table is key-preserved in a join view if every key of the table, whether or not it's included in the view's SELECT clause, would still be a valid key following a change to the columns seen in the view. Only key-preserved tables can be updated through the view.
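For instance, a quick check against the data dictionary; the view name is an assumption:

```sql
-- One row per column of the view; UPDATABLE shows YES or NO
-- depending on whether that column can be changed through the view.
SELECT column_name, updatable
  FROM user_updatable_columns
 WHERE table_name = 'PHONE_LIST';
```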

View Consistency
As we've seen, you can create views to restrict the visible rows in the base table. You've also learned that you can update a view on a single table. One concern you might have is how these two characteristics work together. Suppose that I use the view DEPT_103, created earlier in the section "Hiding Data with Views," and update it. If I update an employee's title, there shouldn't be a problem. But what if I update one record to change the department number to 242? Now the row doesn't belong to the view and may not be a row I can officially see.

You can add a refinement to views that restrict access to certain rows within a table. This refinement prevents users from modifying a row that they can see through the view so that it contains a value that they aren't allowed to see. This is done by including the key phrase WITH READ ONLY or WITH CHECK OPTION in the view definition. WITH READ ONLY doesn't allow any changes to be made to the base table through the view, so you can't perform an insert or a delete, or complete any updates, on the underlying table. WITH CHECK OPTION, on the other hand, does allow any of these operations as long as the resulting rows are still visible under the view definition.

If you want, you can give WITH CHECK OPTION a name by using the CONSTRAINT keyword, just as for other types of constraints (see Chapter 17). The following command shows how you can create a view with a named CHECK OPTION:

CREATE VIEW dept_103 AS
SELECT id, last_name, first_name, middle_initial,
       title, phone, hire_date
  FROM employee
 WHERE department = 103
  WITH CHECK OPTION CONSTRAINT dept_103_dept_ck
/

The name given to the CHECK OPTION here follows a suggested naming standard developed for constraints.

SEE ALSO
For more information on naming constraints,
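By contrast, a read-only variant of the same kind of view might look like this; this is a sketch only, and the _RO suffix is just an illustrative naming choice:

```sql
-- Queries against this view work normally, but any INSERT, UPDATE,
-- or DELETE attempted through it is rejected outright.
CREATE VIEW dept_103_ro AS
SELECT id, last_name, first_name, title, phone
  FROM employee
 WHERE department = 103
  WITH READ ONLY
/
```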






Adding Segments for Different Types of Indexes


From: Using Oracle8. Author: David Austin. Publisher: Que.

Why Index?
    The Mechanics of Index Block Splits
Managing a Standard B*Tree Index
    Sizing an Index
    Creating an Index
    Unique Indexes
    Index Sort Order
    Parallel Operations on Indexes
    Logging Index Operations
    Index Tablespaces
    Index Space-Utilization Parameters
    Creating Indexes at the Right Time
    Monitoring Space Usage
    Rebuilding an Index
    Dropping an Index
Managing Bitmap Indexes
    Bitmap Index Internals
    Using Bitmap Indexes
    Building a Bitmap Index
Managing Reverse-Key Indexes
    Creating a Reverse-Key Index
    Rebuilding Reverse-Key Indexes
Managing Index-Organized Tables
    Why Index-Organized Tables Don't Support Additional Indexes
    Creating an Index-Organized Table
    Monitoring Index-Organized Tables
Why Index?
Although the primary reason for adding indexes to tables is to speed data retrieval, you may use indexes for these additional reasons:

- To enforce uniqueness with a constraint
- To store data in an index cluster
- To reduce locking contention on a foreign key constraint
- To provide an alternate source of data

Why indexes are important
Imagine looking for a document in a filing cabinet that contains documents in a random order. You might have to look at each and every document before finding what you're looking for. The effort required to find the document will increase as the size of the filing cabinet and the number of documents within it increases. A database without an index is similar to such an unorganized filing cabinet. More than 50 percent of systems reporting a performance problem suffer from lack of an index or from the absence of an optimum index.

The first and third items are discussed in more detail in Chapter 17, "Using Constraints to Improve Your Application Performance," which is devoted to integrity constraints. Index clusters, mentioned in the second bullet, are covered in Chapter 18, "Using Indexes, Clusters, Caching, and Sorting Effectively." The fourth item needs some additional comments here.

SEE ALSO
To learn about using indexes with unique constraints,
How to use indexes with foreign key constraints,
How to create and manage index clusters,

Oracle tries to avoid accessing any more blocks than necessary when executing SQL statements. If a query is written in such a way that an index can be used to identify which rows are needed, the server process finds the required index entries. The process usually uses the rowids (pointers to the file, block, and record where the data is stored) to find the required block and move it into memory, if it's not already there. However, if the columns in the query's SELECT clause are all present in the index entry, the server process simply retrieves the values from the index entry and thus avoids the additional block search in the table itself. The biggest benefit of this technique is saving time, but it can also help out if there's a problem with the base table or the file where it's stored: queries that can be satisfied from index entries will continue to function, even if the base table is unavailable.

Figure 8.1 explains the basic concept of locating data with an index. The user looking for specific information first looks for a keyword in the index. This keyword can be easily located because the index is sorted. The index contains the keyword with the detailed information's address. The desired data is quickly located by using this address information.

Figure 8.1: Indexes provide a quick access path to the data.

An Oracle index is a structure, maintained as an independent segment, that contains an ordered set of entries from one or more columns in a table. These ordered entries are stored on a set of blocks known as leaf blocks. To provide fast access to any specific value in these leaf blocks, a structure of pointers is also maintained in the index. These pointers are stored on branch blocks. Each branch block contains pointers for a specific range of indexed values. The pointers themselves may point to a leaf block where the value can be found, or to another branch block that contains a specific subset of the value range.

Oracle uses a b*tree index structure, which guarantees that the chain (the number of blocks that must be examined to get from the highest-level branch block to the required leaf block) is the same no matter what value is being requested. The number of blocks, or levels, in such a chain defines the height of a b*tree. The larger the height, the greater the number of blocks that have to be examined to reach the leaf block and, consequently, the slower the index. Figure 8.2 shows the logical structure of a b*tree index.

Figure 8.2: A b*tree index consists of a set of ordered leaf blocks with a structure of branch blocks to aid navigation to the leaves.

When a leaf block fills up, an empty block is recruited to be a new leaf block, and some records from the full block are moved into this new block. This activity is called "splitting a block." The branch block pointing to the original leaf block adds a new entry for the split block. If the branch block doesn't have room for the new entry, it also splits, which in turn requires the branch block pointing to it to add a new entry for its split block. The very first branch block, called the "root block," is at the top of the index. If it fills up, it too will split, but in this case the original root block and the split block become the second level in the b*tree. A new root block is created, pointing initially to the two blocks that are now at the next level.
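If you want to see the height and block counts of one of your own b*tree indexes, one approach (the index name is a placeholder) is to validate its structure and read the resulting statistics:

```sql
-- VALIDATE STRUCTURE populates the INDEX_STATS view for this
-- session; HEIGHT is the number of levels from root to leaf block.
ANALYZE INDEX emp_last_name_idx VALIDATE STRUCTURE;

SELECT height, blocks, lf_blks, br_blks
  FROM index_stats;
```

Note that VALIDATE STRUCTURE takes a lock while it runs, so avoid it on a busy production index.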

The Mechanics of Index Block Splits


Most indexes will fall into one of two categories: new entries will be randomly inserted between existing values, or new entries will always have a larger value than any existing value. The former is most often found on columns of character data, such as a last name column. The latter would typically be an index on a numeric primary key column, where a sequence generator or some other tool is used to increment the value of each new entry.

In indexes where values are inserted in an apparently random order, very few entries are stored on the very last leaf block; most are placed on a leaf block somewhere in the sorted sequence of leaf blocks. When one of these blocks fills up and has to be split, Oracle can't make any assumptions about future entries that might be added in the same value range. Therefore, it splits the block in the middle, leaving the first 50 percent of the entries on the current block and moving the second 50 percent of the entries to the new leaf block. This 50-50 split maximizes the available space for new entries that, statistically, are just as likely to fall in the first part of the range as in the second.

Splitting the last leaf block
The algorithm for splitting the last index leaf block is optimal when the index is on a column that contains ever-increasing values, such as sequence numbers. Only the last entry is moved to the split block, leaving the preceding entries in place. On rare occasions, an index entry on a column containing unordered data, such as last names, causes the last leaf block to split. If the value is currently the highest in the index in such cases, the split will place just this entry on the new leaf block. This can lead to a further block split if other values need to be inserted in the former last leaf block. Although this is less efficient than splitting the block 50-50 the first time it fills, it occurs too infrequently to be a significant performance factor.

When a new index entry has a higher value than all current entries, it will be added to the last leaf block in the index. When this block fills up and needs to be split, Oracle may not perform a 50-50 block split. If the new entry is the highest value so far, Oracle simply adds a new leaf block and stores just the new record on it. In indexes where all new entries have higher values, this scheme provides the maximum possible space on the new leaf block, which is where all new entries will be stored. It also saves the overhead of moving 50 percent of the entries from the current leaf block, which would free up space that would never be used anyway.

Managing a Standard B*Tree Index


The most common type of index is a standard b*tree index. We will take some time examining the various characteristics of this index type and learn how to build and manage one. If you've read through Chapter 5, "Managing Your Database Space," or Chapter 17, you've already been introduced to some index issues. Chapter 5 emphasizes the general usefulness of placing indexes in their own tablespaces, away from the tables on which they're built. Chapter 17 briefly discusses the management of indexes required to support certain types of constraints.

SEE ALSO
For information on tablespace usage for different segment types,
To see when to use b*tree indexes,

Column order in composite indexes can differ from the table
The columns in a composite index may be defined in any order, irrespective of their order in the base table definition. They don't even have to be built on columns adjacent to each other in the underlying table.

Before proceeding, you need to be introduced to two terms that appear throughout this chapter: a composite index, also called a concatenated index, is an index that you create on multiple columns within a table.

Sizing an Index
As mentioned in Chapter 5, an index is typically smaller than the table on which it's based. This is because the index generally contains only a subset of the columns from the table and thus requires less storage for each entry than is taken by an entire row. If the index is on many or even all columns, though, it will almost certainly be larger than the table it's indexing. This is because each index entry stores not just the data from the table's columns, but also a rowid, which embodies the physical location of the row in the table. In addition, the index stores one or more branch blocks at every level of the b*tree structure. A table doesn't need any such blocks; it stores only the data itself.

Another space issue for an index is the use of empty space. When new rows are being added to a table, the rows can be stored on any available block. As blocks fill, rows are placed on empty blocks and continue to be stored there until they too get full. This can't be done with index entries because they're stored in a specific order. Even if a block is almost empty, Oracle can't store an index entry on it if the entry doesn't belong in the range of values now assigned to that block. Following a block split, an index will have two partially full blocks, neither of which can be used for entries outside its own range. A table, on the other hand, doesn't have to move records around when blocks fill up; it simply adds new records to empty blocks, which it can then continue to fill with other new rows, regardless of their values.

As discussed in Chapter 7, "Adding Segments for Tables," Oracle provides detailed descriptions of algorithms that you can use to estimate the total space required by various types of segments. As with computations for table sizes, some of the numbers you have to plug into the formulae are estimates. These include the average lengths of fields stored in indexed columns and estimates of how many rows will have NULLs in the indexed columns, if any. The calculations don't take into account how many block splits might occur while data is being added to the index, so they become less and less reliable for sizing long-term growth. The space required for branch blocks is accounted for in the calculations, but it's simply based on an expected ratio of branch blocks to leaf blocks. The actual number of branch blocks required depends on the number of distinct values in the index, a factor that isn't included in the space estimate.

Sizing an index with sample data
I recommend that, if you have good sample data, you consider building a test index with this data and extrapolating the full index size from the size of the test index. If you don't have good sample data, it's a somewhat pointless exercise to evaluate the index size by using Oracle's calculations; your input will be a guess.

You don't really need to know how big an index will be before you create one, unless you're very short of disk space. In this case, you should ensure that you'll have sufficient space for the entire index. Unlike a table, you rarely need to read an entire index from start to finish, so there's no real requirement to keep its blocks on contiguous disk space. Therefore, you don't need to define large extents for an index; you can afford to create it or let it grow via many small extents. Again, you don't need very precise sizing predictions if you plan to use small extents; you won't end up wasting too much space even if the last extent isn't very full, something you can't be sure of if you use large extents. If you want to work through the detailed sizing calculations, you can find Oracle's formulae in Appendix A of the Oracle8 Server Administrator's Guide.

Reuse table-sizing scripts
You may want to look at the sizing section in Chapter 7 and review the scripts that compute sizing requirements for tables based on sample data. These scripts can be modified, if you want to use them, to estimate overall index size.

SEE ALSO
Get table-sizing details on
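The extrapolation approach can be sketched like this; table and index names are placeholders:

```sql
-- Build the index on a representative sample of the data...
CREATE INDEX emp_name_test ON employee_sample (last_name);

-- ...then check how much space it actually consumed.
SELECT bytes, blocks, extents
  FROM user_segments
 WHERE segment_name = 'EMP_NAME_TEST';
```

If the sample holds, say, 5 percent of the production rows, the full index will need on the order of twenty times the reported bytes, plus some allowance for future block splits.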

Creating an Index
In Chapter 18, "Using Indexes, Clusters, Caching, and Sorting Effectively," you learn what criteria help determine how useful an index would be in optimizing queries or other table access. The other main reasons to use an index were summarized at the start of this chapter. In this section it's assumed you've determined that you need a standard b*tree index on an existing table. You look at the syntax and how to use it to build an effective index.

Most indexes you build will use the CREATE INDEX command. In Chapter 17 you can find out about indexes that Oracle builds automatically, if they're needed, to support certain integrity constraints.

SEE ALSO
For information on constraints that require indexes,

The syntax for the CREATE INDEX command to build a standard b*tree index on a table is shown in Listing 8.1.

Listing 8.1 Creating an index with the CREATE INDEX command

01: CREATE [UNIQUE] INDEX [index_schema.]index_name
02: ON [table_schema.]table_name
03: ( column_name [ASC][DESC] [,...] )
04: [parallel_clause]
05: [NO[LOGGING]]
06: [TABLESPACE tablespace_name]
07: [NOSORT]
08: [storage_clause]
09: [space_utilization_clause]

Numbering of code lines
Line numbers are included in Listing 8.1 and other code listings to make discussion of the code easier to reference. The numbers should not be included with any command-line commands, as part of any Oracle scripts, or within SQL statements.

CREATE INDEX...ON on lines 1 and 2 are the required keywords for the command. On line 1, UNIQUE creates an index in which every entry must be different from every other entry; by default, an index is non-unique and allows duplicate entries. index_schema is the name of the owner of the index; by default, it's the user creating the index. Finally, index_name is the name given to the index.

On line 2, table_schema is the name of the owner of the table on which the index is being built; by default, it's assumed to be the schema of the user creating the index. Also on line 2, table_name is the name of the table on which the index is being created.

column_name on line 3 is the name of the index's leading column. You can include up to 31 additional columns, as long as the total length of an entry is less than half of the Oracle block size for the database. Also on line 3, ASC and DESC are keywords provided for compatibility with standards; they have no impact on how the index is created. You can use only one of these two keywords per column, but you can apply either one to different columns in a composite index. Finally, [,...] indicates that you can include more than one column in the index, naming them in a comma-separated list.

On line 4, parallel_clause is one of

NOPARALLEL: Causes all access to the index to be serialized
PARALLEL: Allows some parallel access
PARALLEL (DEGREE integer): Sets the number of query slaves to be used in an instance to build the index in parallel; only one format can be used per statement
PARALLEL (DEGREE integer INSTANCES integer): Sets the number of parallel server instances to be used when building the index with internode parallel operations; only one format can be used per statement

On line 5 of Listing 8.1, LOGGING and NOLOGGING determine whether creation of the index and subsequent activities will be logged (LOGGING, the default) or not logged (NOLOGGING) in the redo logs. The additional activities subject to this setting are direct loads through SQL*Loader and direct-load INSERT commands.

TABLESPACE tablespace_name on line 6 identifies the tablespace where the index will be created. By default, it's built in the default tablespace of the user creating the index.

On line 7 of Listing 8.1, NOSORT is used to prevent a sort when the rows are stored in the table in ascending order by the index key. The CREATE INDEX command will fail if any row is out of order. By default, Oracle assumes that the rows aren't in order and sorts the indexed data.

When to use CREATE INDEX's NOSORT option

Oracle's tables, like all relational tables, aren't guaranteed to be stored in any specific order. For NOSORT to work when you're creating an index, the table must have been loaded by using a single process with no parallel operations, and with a source of data already sorted in the order of the indexed column(s). The rows can be entered manually, one row at a time with INSERT statements, or with SQL*Loader in conventional or direct mode. The index needs to be created following such a load, before any additional DML statements are issued against the table; the row order may not be preserved by such commands.

The storage_clause on line 8 is as follows:

STORAGE ( [INITIAL integer [K|M]]
          [NEXT integer [K|M]]
          [PCTINCREASE integer]
          [MINEXTENTS integer]
          [MAXEXTENTS [integer|UNLIMITED]]
          [FREELISTS integer]
          [FREELIST GROUPS integer]
          [BUFFER POOL [KEEP|RECYCLE|DEFAULT]] )

STORAGE is the required keyword. With no STORAGE clause, or for any optional storage components not included in the STORAGE clause, the value will be inherited from the tablespace's default settings.

- INITIAL integer is the size, in bytes, of the first extent. K and M change bytes to kilobytes or megabytes, respectively.
- NEXT integer is the size of the second extent.
- PCTINCREASE integer is the multiplier applied to the size of each subsequent extent following the second extent.
- MINEXTENTS integer is the number of extents built when the index is created.
- MAXEXTENTS integer and MAXEXTENTS UNLIMITED are the maximum number of extents allowed for the index, where you must provide a number or the keyword UNLIMITED, but not both.
- FREELISTS integer is the number of freelists assigned to the index; the default value is 1.
- FREELIST GROUPS integer is the number of freelist groups assigned to the index; the default value is 1.
- BUFFER POOL defines the default buffer pool for the index blocks. Only one option is allowed:

KEEP: Assigns blocks to the kept buffer pool
RECYCLE: Assigns blocks to the recycle buffer pool
DEFAULT: Assigns blocks to neither pool; this is the default if you don't include the BUFFER POOL option

Order of columns in a composite index
If you're building a composite index and aren't sure which columns will be referenced most often, create the index with the columns ordered from the most to the least discriminating. For example, in an index on the honorific (Mr., Ms., Dr., and so on), the first initial, and the last name columns, put the last name first (many different values), then the first initial (26 values), then the honorific (a handful of values).

The space_utilization_clause on line 9 is as follows:

PCTFREE integer: Reserves space for new entries on a block (default is 10)
INITRANS integer: Sets the number of transaction slots reserved in each block (default is 2)

MAXTRANS integer: Sets the maximum number of transaction slots that can be created in a block (default is 255)

You don't have to name the columns in a composite index in the same order as they're defined in the table, nor do you, as this implies, have to use adjacent columns. Your best option is to include the most queried column first: a query that provides a value for the leading column of a composite index can use the index to find the required rows, even if the other indexed columns aren't referenced by the query. You should include the other columns in descending order of frequency of reference for the same reason; Oracle can use as many of the leading columns of an index as are identified in a SQL statement's WHERE clause.

SEE ALSO
Find out more about parallel operations,
For information about SQL*Loader and its options, including direct and parallel direct loads,
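Pulling the listing's pieces together, a representative command might look like the following; all names and sizes are illustrative, and the composite column order follows the most-to-least-discriminating rule described above:

```sql
CREATE INDEX emp_name_idx
    ON employee (last_name, first_initial, honorific)
    TABLESPACE indx
    PCTFREE 10
    INITRANS 2
    STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0
             MAXEXTENTS UNLIMITED)
/
```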

Unique Indexes
You should rarely need to create an index as UNIQUE. A unique constraint, rather than a unique index, should be used to enforce uniqueness between rows. A unique constraint does use an index, and you can create one for this purpose as discussed in Chapter 17, but it doesn't have to be a unique index.

A composite unique index will ensure only that the set of values in each entry is distinct from all other entries. It will allow the same value to be repeated in a column multiple times, as long as at least one other column has a different value from any existing entry. An entry will be stored in the index if at least one column has a non-NULL value. A NULL value in a column is treated as potentially containing the same value as another NULL value in that same column. Consequently, an entry containing one or more NULLs, but with all the same values in the non-NULL columns as an existing entry, would be considered in violation of the unique condition. A row with these characteristics therefore couldn't be stored.

NULLs aren't considered for uniqueness
A row with a NULL in the indexed column won't be recorded in the index, so a unique index won't prevent multiple rows with a NULL in the indexed column from being stored.
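In practice, then, you would normally let a constraint do the work; the names here are illustrative, and you'd use one approach or the other for a given column, not both:

```sql
-- Preferred: a UNIQUE constraint, which uses a supporting index.
ALTER TABLE employee
  ADD CONSTRAINT employee_badge_uq UNIQUE (badge_no);

-- The direct alternative, rarely needed:
CREATE UNIQUE INDEX employee_badge_uidx ON employee (badge_no);
```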

Index Sort Order


All indexes are created with the values in ascending order, regardless of the ASC or DESC option settings on each column. Internally, indexes can be scanned in either direction, thanks to forward and backward pointers between the leaf blocks.

ASC and DESC index options
The options for ordering indexes in ascending or descending order via the ASC and DESC keywords are included only for compliance with SQL standards. Retrieval from an Oracle index in either direction is equally efficient.

Parallel Operations on Indexes


Unless your index is partitioned, only the creation of the index can be processed in parallel. Subsequent access to the index will be done through a single serial server. Without the parallel clause in the CREATE INDEX command, the creation will also be serial, regardless of the underlying table's definition. The PARALLEL clause's INSTANCES option is of significance only if you're running Oracle Parallel Server.

Check availability of parallel server processes
Parallel operations on indexes, as well as on any other Oracle objects, will occur only if your instance (or instances) is running with a sufficient number of parallel server processes available. This is determined by the PARALLEL_MIN_SERVERS and PARALLEL_MAX_SERVERS parameter settings in your initialization file.

Parallel creation is usually much faster than serial creation, particularly if you have multiple CPUs on your database server and the table is striped across multiple disk drives. For this reason, you may want to build all your indexes with an appropriate degree of parallelism (and number of instances, if applicable). In particular, you should consider creating any indexes needed to enforce constraints with the CREATE INDEX command so that you can include the PARALLEL clause. You don't have the option of defining a parallel operation when an index is built automatically as a result of creating or enabling the constraint.

SEE ALSO
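For example, a parallel build might look like this; the index name and degree are illustrative, and the statement assumes your instance has enough parallel server processes configured:

```sql
-- Four query slaves cooperate on the build; subsequent access to
-- the finished (non-partitioned) index is still serial.
CREATE INDEX emp_hire_date_idx
    ON employee (hire_date)
    PARALLEL (DEGREE 4);
```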

http://www.informit.com/content/0789716534/element_008.shtml (7 of 17) [26.05.2000 16:48:04]

informit.com -- Your Brain is Hungry. InformIT - Adding Segments for Different Types of Indexes From: Using Oracle8

Read about partitioned indexes and their parent segments,
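A parallel index build might be sketched as follows; the index, table, and tablespace names are illustrative:

```sql
-- Build the index using four parallel server processes;
-- queries will still access the finished index serially
CREATE INDEX ord_customer_idx
    ON orders (customer_id)
    TABLESPACE indx
    PARALLEL (DEGREE 4);
```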

Logging Index Operations


You can speed the processing of your CREATE INDEX commands by not creating redo log entries. After the index is built, however, you should back up the tablespace where it's stored: recovery following a media failure can't reconstruct the index correctly without the normally created redo log entries. In addition, you should repeat the backup any time you perform one of the other operations that aren't logged under the NOLOGGING setting, for exactly the same reason. Of course, in some cases you may not realize that a subsequent operation isn't generating redo entries.

Index activities affected by the NOLOGGING option
In addition to the creation of the index, the two other activities that aren't logged when the NOLOGGING option is in place are direct loads with SQL*Loader and direct-load inserts with the INSERT...SELECT command.

To avoid any problems with future unlogged changes to your index, you might want to turn on logging after its creation. You can still create the index without logging and then use the ALTER INDEX command to change the mode after it's built. If you don't include the LOGGING or NOLOGGING option in the CREATE INDEX command, a non-partitioned index is built in logging mode, and future activities against it are logged. Partitioned indexes acquire their logging mode from the parent segment.
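A typical sequence, using hypothetical names, might be:

```sql
-- Build quickly, without generating redo entries
CREATE INDEX emp_name_idx
    ON employees (last_name)
    NOLOGGING;

-- Switch back to logging mode so future changes are recoverable
ALTER INDEX emp_name_idx LOGGING;

-- Back up the index's tablespace now: the unlogged build can't be
-- reconstructed during media recovery
```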

Index Tablespaces
By default, indexes are created in your default tablespace, just like any other segment. Chapter 5 explains why you should consider using different tablespaces for tables and for their indexes. Your default tablespace is typically the one where you build your tables, which probably isn't where you want your indexes. It's important to consider where you really want any new index to reside and, if necessary, to include the TABLESPACE clause in your CREATE INDEX command.
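For example (the names here are illustrative):

```sql
CREATE INDEX dept_name_idx
    ON departments (dept_name)
    TABLESPACE indx;   -- keep the index out of the table's tablespace
```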

Index Space-Utilization Parameters


The space-utilization parameters behave differently for indexes than for tables, which can be confusing:

- PCTFREE sets aside a percentage of each block for use by new index entries. Unlike a table, however, this space is reserved only during the initial index creation. After that, the block fills up as much as possible, even though each new entry exceeds PCTFREE, because index entries must be stored in a specific order. If the initial load filled every block to its capacity, later insertions into the range of values on any block would cause a block split. To reduce the number of such splits, you should reserve space for new entries in indexes where they're likely to occur. Generally, new entries are likely to be needed between other values in alphanumeric data, such as people's names, telephone numbers, or street addresses. Numeric primary keys generally consist of an increasing value, so all new entries go to the end of the index. Such indexes don't need to reserve free space for new entries in existing blocks.
- Because an index block has to contain entries within a certain value range, deletions from the block don't necessarily make the block available for new entries. The space is reused only if a new entry's value allows it to fit between the dropped entry's adjacent entries. No PCTUSED space-utilization parameter is required to control this behavior.
- INITRANS must have a minimum value of 2, not 1 (as in a table). The second transaction slot is required in case the block has to be split. The split occurs during the user's transaction, which is assigned to one transaction slot, and is performed by a second transaction initiated by SYS, which is assigned to the other slot.
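These parameters can be sketched in use as follows; the values shown are illustrative, not recommendations:

```sql
-- Alphanumeric keys: new entries land between existing values,
-- so reserve room at creation time to reduce block splits
CREATE INDEX cust_name_idx
    ON customers (last_name, first_name)
    PCTFREE 20
    INITRANS 2          -- the minimum for an index
    TABLESPACE indx;
```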

Creating Indexes at the Right Time


Unless you have a table whose rows are physically stored in the order of the index you need (in which case you can use the NOSORT option), the indexed columns have to be sorted as part of the index creation. The only way to guarantee that rows are in the required order is to sort them first and load them in that order, and then prevent any further DML against the table until the index is built.

After an index is built, any changes to the underlying table that affect the indexed columns are automatically applied to the index. This adds to the overhead of processing those statements. Unless you need the index to speed the processing or to support concurrent query processing, you should consider dropping it during periods of heavy activity against the table.

Drop indexes that have frequent block splits
Indexes prone to block splitting are good candidates for dropping during periods of heavy DML, either because they're old and many of their blocks have no free space left for new entries, or because their entries are frequently updated (an update results in a DELETE of the old entry and an INSERT of the new one, so that ordering is preserved). They can be recreated immediately or, if performance doesn't suffer too much without them, when the underlying table is no longer so busy.
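A bulk-load window might be handled like this (the names are hypothetical):

```sql
DROP INDEX ord_customer_idx;     -- avoid index maintenance during the load

-- ...perform the heavy DML or bulk load here...

CREATE INDEX ord_customer_idx    -- recreate once the table quiets down
    ON orders (customer_id)
    TABLESPACE indx;
```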

Monitoring Space Usage


As mentioned earlier, the space freed in an index when an entry is deleted isn't necessarily available to a new entry. The "hole" left by the deleted entry is at a specific location in the sort order of the index. For a simple analogy, consider a file cabinet: the drawers correspond to index leaf blocks, and the folders within, filed in alphabetical order, are akin to index entries. If you discard a folder labeled BROWN, it leaves room in the cabinet between the folders labeled BRONX and BRUIN. If you then need to file a new folder for GREEN, you won't place it where the BROWN folder was, but in the drawer with the other GR folders, say, between GRAY and GREY. A new folder for BROWNING, however, could legitimately occupy the space vacated by the old BROWN folder. An index works in much the same way: freed space can be reused only if the new entry's value falls into the value range opened by the deleted entry.

Index leaf entries are dropped not only when their corresponding row is dropped from the table, but also when the row is updated, if the update affects the indexed column(s). An update is equivalent to taking a folder from the theoretical file cabinet, changing the label (say, from BROWN to WHITE), and refiling it. The folder is obviously of no use filed in its original position; it has to be inserted in the correct position among the other folders beginning with W. As far as the index is concerned, the old entry is removed and a new one is created. The space left behind by the removed entry is subject to reuse in the same way as space left by an actual row deletion. As you can see, an index on a table that undergoes many updates and deletions can end up with lots of free space that may or may not become reusable, depending on the nature of the new records being inserted into the table.
One type of table and its corresponding index is very likely to accumulate free space on the leaf blocks: a table with a primary key based on some form of sequence number, either obtained from an Oracle sequence generator or created by the application. Consider an ORDERS table with an index on the ORDER_NUMBER column, where each new order is given the next unused number. Orders are added to the table when they're received and dropped after the order has been filled and the invoice paid. Over time, the leaf blocks holding the older orders empty out. In most cases they become completely empty and can then be recycled for use with new orders. But what if some orders are from delinquent customers who never paid the bill, or are standing orders that remain in the system for years? The leaf blocks containing such order numbers must be maintained, even if only one order remains on them, and their free space can't be reused by new orders because new orders have numbers much higher than the range reserved for these blocks.

Consider order numbers 1023 and 3345 on two separate index blocks (blocks 22 and 28, for example). The blocks between them, 23 through 27, may have already been emptied and recycled with higher order numbers. However, the pointers between adjacent leaf blocks that allow an index to be scanned in ascending or descending order now "connect" blocks 22 and 28, so no order outside the range 1023-3345 can be placed on either block; it would be out of logical order. In this way an index can gradually become burdened by a large number of almost-empty blocks. Such an index wastes disk space and results in slow access for values on the sparsely populated blocks. This chapter discusses an index option (reverse-key indexing) that can help you avoid this type of situation. You should regularly evaluate the space usage, and hence the efficiency, of your indexes.
Over time you'll learn which indexes are prone to space problems and which aren't, either because their underlying tables are relatively static or because the indexed values are random and can therefore reuse space. You can use an option of the ANALYZE command to see how well or how poorly an index is using its space, particularly with respect to deleted entries.

Using the INDEX_STATS view


INDEX_STATS is a temporary view created by the ANALYZE INDEX...VALIDATE STRUCTURE command. It exists only for the duration of the session that created it and can contain information for only one index at a time. If you execute a second ANALYZE INDEX...VALIDATE STRUCTURE command in your session, INDEX_STATS will contain information only about the second index analyzed. Only the session that created it can see the view, so another user, or even your own userid connected through a different session, won't see it. When you log out of your session, the view is removed, and you'll need to rerun the ANALYZE command to recreate it. The following command populates the INDEX_STATS view with statistical information about the index:

ANALYZE INDEX [schema.]index_name VALIDATE STRUCTURE

Of particular interest are the columns LF_ROWS and DEL_LF_ROWS, which show the current number of entry slots in leaf blocks and the total number of entries deleted from leaf blocks, respectively, and LF_ROWS_LEN and DEL_LF_ROWS_LEN, which show the total number of bytes associated with these entries. A rule of thumb is that when the number of deleted entries, or the space they used, exceeds 20 percent of the total, you should consider rebuilding the index to reclaim the space. However, you should also check the PCT_USED column: if it's 80 percent or more, an average amount of space use for a typical index, you may not want to incur the work of rebuilding. You should continue to monitor the index, however, to ensure that the statistics stay in the preferred ranges.

Monitoring the number of keys (leaf entries) versus the number of levels in the b*tree over time is another way to see whether an index is becoming overburdened with deleted-entry space. The number of levels is shown in the HEIGHT column of INDEX_STATS and shouldn't change if the total number of index entries stays the same.
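To check these figures, you might run something like the following in one session (the schema and index names are hypothetical, and the query assumes the index isn't empty):

```sql
-- Populate INDEX_STATS for one index (session-private, one index at a time)
ANALYZE INDEX scott.emp_name_idx VALIDATE STRUCTURE;

-- Compare deleted entries and their space against the totals
SELECT name,
       height,
       lf_rows,
       del_lf_rows,
       ROUND(100 * del_lf_rows / lf_rows, 1)         AS pct_deleted,
       ROUND(100 * del_lf_rows_len / lf_rows_len, 1) AS pct_deleted_space,
       pct_used
  FROM index_stats;
```

If PCT_DELETED or PCT_DELETED_SPACE exceeds roughly 20, and PCT_USED is below about 80, the index is a candidate for rebuilding.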
If the index height keeps increasing, more branch block levels are being added. This behavior is to be expected if more entries are being stored in the leaf blocks. If, on the other hand, the additional branch levels are supporting roughly the same number of leaf entries, the structure is becoming top-heavy with branch blocks. This occurs when branch blocks are being maintained for partially emptied leaf blocks.

Build a history of index statistics
If you want to keep a record of an index's statistics over time, you can issue the command CREATE TABLE table_name AS SELECT * FROM index_stats, using a date or sequence number as well as the index name as part of table_name. Remember to do this before you end your session or issue another ANALYZE command.

The statistics in INDEX_STATS aren't used by the Oracle optimizers, and the existence of the view in a session won't change the default optimizer behavior. This behavior differs from that of the statistics collected with the ANALYZE INDEX...COMPUTE STATISTICS or ANALYZE INDEX...ESTIMATE STATISTICS commands. If you use these commands, however, you'll see slightly different values in DBA_INDEXES than you see in INDEX_STATS. This is because some values in the latter may reflect rows that have been deleted, whereas the values in the former are based only on the current index contents.

SEE ALSO
To learn more about the use of statistics by Oracle's optimizer,

Rebuilding an Index
You may have a number of reasons to rebuild an index. Here are some of the more common ones:

- To reclaim storage taken by deleted entries
- To move the index to a different tablespace
- To change the physical storage attributes
- To reset space-utilization parameters

You can use two methods to make these changes. The first is to drop the index and recreate it by using the CREATE INDEX command discussed earlier in this chapter. The second is to use the REBUILD option of the ALTER INDEX command. Each method has its advantages and disadvantages, which Table 8.1 summarizes.

Table 8.1 Alternatives for recreating an index

Drop and Rebuild:                                                 Use REBUILD Option:
Can rename index                                                  Can't rename index
Can change between UNIQUE and non-UNIQUE                          Can't change between UNIQUE and non-UNIQUE
Can change between b*tree and bitmap                              Can't change between b*tree and bitmap
Needs space for only one copy of the index                        Needs space for duplicate index temporarily
Requires a sort if data exists                                    Never requires a sort
Index temporarily unavailable for queries                         Index remains available for queries
Can't use this method if index is used to support a constraint    Can use this method for an index supporting a constraint

The biggest advantage of dropping and recreating your index is that you don't need space for the original index and the new index to exist at the same time. However, this doesn't mean the process has no overhead. To build the new version of the index, Oracle has to sort the column data in all the existing rows. This requires memory and, for large tables, may even require the use of temporary segments on disk. The sort is also time-consuming for a large table, and the index is unavailable between the time it's dropped and the time the new version is ready. As you may guess, the sort-space overhead and the time the work takes are the biggest disadvantages of this approach.

If you elect to use the drop-and-recreate approach, you issue the DROP INDEX command (discussed in the next section) and then the appropriate CREATE INDEX command (discussed earlier in this chapter). If the index is being used to enforce a constraint, you can't use this method; Oracle will prevent you from dropping the index. Of course, you can temporarily disable the constraint, as long as you're prepared to deal with any changes to the table that might prevent you from reenabling it.

The biggest advantages and disadvantages of the REBUILD option are exactly the opposite of those for the drop-and-recreate option. When rebuilding an index, Oracle simply reads the leaf block information, which is already in sorted order, to create the new index. When the new index is built, the old copy is dropped automatically. Because a sort isn't required, the process is relatively fast. Also, it leaves the original index in place for use by queries that run concurrently with the rebuild. The disadvantage is that you must have room in your database for the current and new versions of the index simultaneously.
This shouldn't be a problem if you're moving the index to a different tablespace, but it may be a deterrent if you need to use the same tablespace. The syntax of the ALTER INDEX...REBUILD command is as follows:

ALTER INDEX index_name REBUILD
  [parallel_clause]
  [NO[LOGGING]]
  [TABLESPACE tablespace_name]
  [NO[REVERSE]]
  [storage_clause]
  [space_utilization