Oracle DBA Interview Questions and Answers
Dear friends, I have collected these from different sources on the net and am
verifying each and every question and separating them by topic. Within about
another 3 days, you will have all the questions in proper format. ALL THE BEST.
Resume based.
Tell me about yourself? Your role as a DBA? Your Day to Day activities?
General:-
How to check Oracle database version?
SQL> Select * from v$version;
Installation:
What are the steps to install Oracle on a Linux system? List two kernel parameters
that affect Oracle installation.
1) Configure the storage; create the folders as per the requirements.
2) Configure the required users (oracle) and groups (oinstall, dba).
3) Configure the kernel parameters (SHMMAX, SHMMNI, SHMALL) (a sketch of these settings follows below).
4) Give the necessary permissions on the folders to the oracle user.
5) Create the graphical environment (VNC etc.).
6) Start the installation and provide the inputs.
7) Run root.sh.
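As a rough illustration of step 3, these are typical shared-memory settings in
/etc/sysctl.conf; the values are examples only and must be sized for your own
server (see the SHMMAX/SHMALL post further down this page):
# /etc/sysctl.conf -- illustrative values, size these for your own server
kernel.shmmax = 536870912   # max size of one shared memory segment, in bytes
kernel.shmall = 1310720     # total shared memory system-wide, in pages
kernel.shmmni = 4096        # max number of shared memory segments
Load them with sysctl -p and verify with sysctl -a | grep shm.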
Why do you run orainstRoot.sh and root.sh once you finalize the installation?
orainstRoot.sh needs to be run to change the permissions of the oraInventory
directory to 770 and its group name to dba.
root.sh (in the ORACLE_HOME) needs to be run to create the oratab file in
/etc/oratab (/var/opt/oracle/oratab on Solaris) and to copy dbhome, oraenv and
coraenv to /usr/local/bin.
orainstRoot.sh
[root@oracle11g ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
root.sh
[root@oracle11g ~]# /u01/app/oracle/product/11.1.0/db_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.1.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
For an Oracle installation on Unix/Linux, we will be prompted to run a script
'root.sh' from the Oracle inventory directory. This script needs to be run only
the first time any Oracle product is installed on the server.
It creates the additional directories and sets appropriate ownership and
permissions on files for the root user.
File types and their extensions when OMF is enabled
File type          Extension   Default location (when created with OMF)
Pfile              .ORA        C:\oracle\product\10.2.0\admin\orcl\pfile
Spfile             .ORA        C:\oracle\product\10.2.0\db_1\database
Control file       .CTL        C:\oracle\product\10.2.0\oradata\orcl
Redo log file      .LOG        C:\oracle\product\10.2.0\oradata\orcl
Archive log file   .LOG        C:\oracle\product\10.2.0\flash_recovery_area\ORCL\ARCHIVELOG
Data file          .DBF        C:\oracle\product\10.2.0\oradata\orcl
Alert log file     .LOG        C:\oracle\product\10.2.0\admin\orcl\bdump
Trace log files    .TRC        C:\oracle\product\10.2.0\admin\orcl\udump|bdump|cdump
Password file      .ORA        C:\oracle\product\10.2.0\db_1\database
Tuning:-
What are bind variables?
With bind variables in SQL, Oracle can cache a query once in the SQL cache area.
This avoids a hard parse on each execution, which saves on the various locking and
latching resources used to check object existence and so on.
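A quick SQL*Plus sketch of the difference (the emp table and deptno column are
illustrative):
SQL> VARIABLE v_dept NUMBER
SQL> EXECUTE :v_dept := 10
SQL> SELECT * FROM emp WHERE deptno = :v_dept;  -- bind: one shared cursor, soft parse on re-execution
SQL> SELECT * FROM emp WHERE deptno = 10;       -- literal: each distinct value hard parses a new cursor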
What happens when we fire a SQL statement in Oracle?
It goes through the following stages:
a) parsing
b) optimization
c) execution
d) fetch (in the case of SELECT)
<PENDING> I still have to write further on this; will provide shortly.
Architecture
What is the difference between the Oracle SID and the Oracle service name?
The Oracle SID is the name that uniquely identifies your instance/database, whereas
the service name is a TNS alias that can be the same as or different from the SID.
What is the difference between a data block, an extent and a segment?
A data block is the smallest unit of logical storage available in an Oracle
database. An extent is nothing but a collection of one or more contiguous data
blocks. All the extents that an object occupies, grouped together, are considered
the segment of that database object.
What is the difference between PGA and UGA?
When you are running a dedicated server, session information is stored inside the
process global area (PGA); when you are using a shared server, it is stored inside
the user global area (UGA).
What is SGA?
The SGA is a memory structure and part of the instance; it is created when the
instance is started. An SGA is dedicated to a single Oracle instance. All Oracle
processes use the SGA to hold information. The SGA stores incoming data and the
internal control information needed by the database. The following are the key
components of the SGA (a quick query to view them follows the list):
a) DB cache (db_cache_size)
b) Shared pool (shared_pool_size)
c) Log buffer (log_buffer)
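To see the sizes of these components on a running instance, a quick sketch
(V$SGAINFO is available from 10g onwards):
SQL> SHOW SGA
SQL> SELECT name, bytes FROM v$sgainfo ORDER BY bytes DESC;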
Define the structure of the shared pool component of the SGA.
The shared pool contains two major areas:
the library cache (parsed SQL statements, cursor information and execution plans), and
the data dictionary cache (user account information, privilege information, segment
and extent information).
What is the difference between the SMON and PMON processes?
SMON (System Monitor) performs recovery after instance failure, monitors temporary
segments and extents, cleans up temp segments and coalesces free space. It is a
mandatory process of the DB and starts by default.
PMON (Process Monitor) cleans up failed process resources. In a shared server
architecture it monitors and restarts any failed dispatcher or server process. It
is a mandatory process of the DB and starts by default.
What is a system change number (SCN)?
The SCN is a value that is incremented whenever database changes are made. For
further details on the SCN, please refer to my article on this blog.
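Two simple ways to read the current SCN (10g and above):
SQL> SELECT current_scn FROM v$database;
SQL> SELECT dbms_flashback.get_system_change_number FROM dual;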
What is the main purpose of a CHECKPOINT in an Oracle database? How do you force
Oracle to perform a checkpoint?
A checkpoint is a database event which synchronizes the database blocks in memory
with the datafiles on disk. It has two main purposes: to establish data consistency
and to enable faster database recovery.
The following are the parameters a DBA uses to adjust the time or interval at which
checkpoints occur in the database (a sketch of setting them follows below).
LOG_CHECKPOINT_TIMEOUT = 3600; # every one hour
Specifies (in seconds) the maximum amount of time that may pass between the
incremental checkpoint position and the last write to the redo log (sometimes
called the tail of the log). This parameter also signifies that no buffer will
remain dirty in the cache for more than this many seconds. Basically, it ensures
that a checkpoint occurs at least every LOG_CHECKPOINT_TIMEOUT seconds.
LOG_CHECKPOINT_INTERVAL = 1000; # number of OS blocks.
Specifies the frequency of checkpoints in terms of the number of redo log file
blocks that can exist between an incremental checkpoint and the last block written
to the redo log. This number refers to physical operating system blocks, not
database blocks.
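A small sketch of setting these parameters and forcing a checkpoint manually
(SCOPE=BOTH assumes an spfile):
SQL> ALTER SYSTEM SET log_checkpoint_timeout=3600 SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_checkpoint_interval=1000 SCOPE=BOTH;
SQL> ALTER SYSTEM CHECKPOINT;  -- forces an immediate checkpoint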
What is the use of the large pool, and in which cases do you need to set it?
The large pool is an optional memory structure. You need to set the large pool if
you are using MTS (multi-threaded server) or RMAN backups. The large pool prevents
RMAN and MTS from competing with other subsystems (shared pool memory) for the same
memory. RMAN uses the large pool for backup and restore when you set the
DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If
neither of these parameters is enabled, Oracle allocates backup buffers from local
process memory rather than shared memory, and the large pool is not used. I have
one more URL for this article on the blog; please refer to the same.
What does the database do during the mounting process?
While mounting the database, Oracle reads the control file, which is used for
verifying the physical database files during the sanity check when the database is
opened.
What are the logfile states?
There are two dynamic views that show the status of the redo log files:
A) V$LOG
B) V$LOGFILE
V$LOG displays redo log file information from the control file.
Status:
1) UNUSED - the online redo log has never been written to. This is the state of a
redo log that was just added, or just after a RESETLOGS, when it is not the current
redo log.
2) CURRENT - the current redo log. This implies that the redo log is active. The
redo log could be open or closed.
3) ACTIVE - the log is active but is not the current log. It is needed for crash
recovery. It may be in use for block recovery. It may or may not be archived.
4) CLEARING - the log is being re-created as an empty log after an ALTER DATABASE
CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.
5) CLEARING_CURRENT - the current log is being cleared of a closed thread. The log
can stay in this status if there is some failure in the switch, such as an I/O
error writing the new log header.
6) INACTIVE - the log is no longer needed for instance recovery. It may be in use
for media recovery. It might or might not be archived.
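A quick sketch of checking these states from the two views:
SQL> SELECT group#, sequence#, members, archived, status FROM v$log;
SQL> SELECT group#, member, status FROM v$logfile;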
What is log switch?
The point at which Oracle stops writing to one online redo log file and begins
writing to another is called a log switch. You can force a log switch:
ALTER SYSTEM SWITCH LOGFILE;
Explain the Oracle architecture?
Oracle server:
A DBMS that provides an open, comprehensive, integrated approach to information
management. It consists of an instance and a database.
Oracle database:
A collection of data that is treated as a unit. It consists of datafiles, control
files and redo log files (plus, optionally, a parameter file, password file and
archived logs).
Oracle instance:
A means to access an Oracle database. An instance always opens one and only one
database and consists of memory structures and background processes.
Instance memory Structures:
System Global Area (SGA):
Allocated at instance startup, and is a fundamental component of an Oracle
Instance.
SGA Memory structures:
Includes Shared Pool, Database Buffer Cache, Redo Log Buffer among others.
Shared Pool:
Consists of two key performance-related memory structures: the Library Cache and
the Data Dictionary Cache.
Library Cache:
Stores information about the most recently used SQL and PL/SQL statements and
enables the sharing of commonly used statements.
Data Dictionary Cache:
Stores a collection of the most recently used definitions in the database,
including information about database files, tables, indexes, columns, etc. It
improves performance: during the parse phase, the server process looks in the data
dictionary for the information needed to resolve object names and validate access.
Database Buffer Cache:
Stores copies of data blocks that have been retrieved from the datafiles.
Redo Log Buffer:
Records all changes made to the database data blocks; its primary purpose is
recovery. Redo entries contain the information needed to reconstruct, or redo,
changes.
User process:
Started when a database user requests a connection to the Oracle server. It runs on
the client machine where the user initiates the connection. The user process does
not interact directly with the Oracle server; the connection initially goes through
the Oracle listener.
Server process:
Connects to the Oracle instance; it is started when a user establishes a session,
depending on the architecture. It fulfills the calls generated by the client and
returns the results.
Each server process has its own non-shared PGA, allocated when the process is
started.
The server process parses and runs SQL statements issued through the application,
and reads the necessary data blocks from the datafiles on disk into the buffer
cache of the SGA if the blocks are not already present there. The buffer cache is
managed with an LRU algorithm.
Program Global Area (PGA):
Memory area used by a single Oracle server process. Allocated when the server
process is started, deallocated when the process is terminated and used by only one
process.
Used to process SQL statements and to hold logon and other session information.
Background processes:
Started when an Oracle instance is started. The background processes maintain and
enforce the relationships between physical and memory structures.
There are two types of database processes:
1. Mandatory background processes
2. Optional background processes
Mandatory background processes:
• DBWn, PMON, CKPT, LGWR, SMON
Optional background processes:
• ARCn, LMDn, RECO, CJQ0, LMON, Snnn, Dnnn, Pnnn, LCKn, QMNn
DBWn writes when:
• Checkpoint occurs
• Dirty buffers reach threshold
• There are no free buffers
• Timeout occurs
• RAC ping request is made
• Tablespace OFFLINE
• Tablespace READ ONLY
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP
Log Writer (LGWR) writes:
• At commit
• When 1/3rd full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes
System Monitor (SMON) responsibilities:
• Instance recovery
• Rolls forward changes in redo logs
• Opens database for user access
• Rolls back uncommitted transactions
• Coalesces free space
• Deallocates temporary segments
Process Monitor (PMON) cleans up after failed processes by:
• Rolling back the transaction
• Releasing locks
• Releasing other resources
• Restarting dead dispatchers
Checkpoint (CKPT) is responsible for:
• Signaling DBWn at checkpoints
• Updating datafile headers with checkpoint information
• Updating control files with checkpoint information
Archiver (ARCn):
• Optional background process
• Automatically archives online redo logs when ARCHIVELOG mode is set
• Preserves the record of all changes made to the database
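To list the background processes actually running in your own instance, a simple
sketch:
SQL> SELECT name, description FROM v$bgprocess WHERE paddr != '00' ORDER BY name;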
What is oraInventory?
oraInventory is a repository (directory) which stores/records the Oracle software
products and their ORACLE_HOME locations on a machine. Nowadays this inventory is
in XML format and is called the XML inventory, whereas in the past it used to be in
binary format and was called the binary inventory.
There are basically two kinds of inventories:
one is the local inventory (also called the Oracle Home inventory), and the other
is the global inventory (also called the central inventory).
What is the local inventory?
The inventory inside each Oracle home is called the local inventory or Oracle home
inventory. It holds information relevant to that Oracle home only.
What is the global inventory?
The global inventory holds information about the Oracle products on a machine.
These products can be various Oracle components like the database, Oracle
application server, collaboration suite, SOA suite, forms & reports or Discoverer
server. The global inventory location is determined by the file oraInst.loc in /etc
(on Linux) or /var/opt/oracle (on Solaris). If you want to see the list of Oracle
products on a machine, check the file inventory.xml under ContentsXML in
oraInventory (please note, if you have multiple global inventories on the machine,
check all oraInventory directories).
You will see an entry like:
<HOME NAME="ORA10g_HOME" LOC="/u01/oracle/10.2.0/db" TYPE="O" IDX="1"/>
What is Oracle Home Inventory?
Oracle home inventory or local inventory is present inside each Oracle home. It
only contains information relevant to a particular Oracle home. This file is
located in the following location:
$ORACLE_HOME/inventory
It contains the following files and folders:
• Components File
• Home Properties File
• Other Folders
Can I have multiple global inventories on a machine?
Quite a common question is whether you can have multiple global inventories, and
the answer is YES, you can, but if you are upgrading or applying a patch, change
the inventory pointer oraInst.loc to the respective location. If you are using a
single global inventory and you wish to uninstall any software, remove it from the
global inventory as well.
What do I do if my global inventory is corrupted?
No need to worry if your global inventory is corrupted; you can recreate the global
inventory on the machine using the Universal Installer and attach the already
installed Oracle home with the -attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc
ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name"
CLUSTER_NODES="{}"
Specific to 11g:
Oracle Database 11g new features for DBAs?
1) Automatic Diagnostic Repository [ADR]
2) Database Replay
3) Automatic Memory Tuning
4) Case sensitive password
5) Virtual columns and indexes
6) Interval Partition and System Partition
7) The Result Cache
8) ADDM RAC Enhancements
9) SQL Plan Management and SQL Plan Baselines
10) SQL Access Advisor & Partition Advisor
11) SQL Query Repair Advisor
12) SQL Performance Analyzer (SPA)
13) DBMS_STATS Enhancements
14) Total Recall (Flashback Data Archive)
Note: the above are only the top new features; other features introduced in 11g
will be included subsequently.
What is the Result Cache?
The SQL Result Cache is another cache in the SGA, just like the buffer cache or the
program global area. When you execute a query with the result_cache hint, Oracle
performs the operation just like any other, but the results are stored in the SQL
Result Cache. Subsequent invocations of the same query do not actually go to the
table(s) but get the results from the cache. The size of the cache is determined by
several initialization parameters:
Parameter / Description
result_cache_max_size - maximum size of the result cache (5M for 5 MB, for
example). If you set this to 0, result caching is completely turned off.
result_cache_max_result - the percentage of result_cache_max_size that any single
result can use.
result_cache_mode - if set to FORCE, all queries are cached if they fit in the
cache. The default is MANUAL, which means that only queries with the hint are
cached.
result_cache_remote_expiration - the number of minutes that a cached result that
accesses a remote object remains valid. The default is 0.
I am still working on the result cache above; for further details, refer to the
URL below.
http://www.oracle.com/technetwork/articles/sql/11g-caching-pooling-088320.html
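A minimal sketch of using the hint and checking the cache (the HR sample tables
are used for illustration):
SQL> SELECT /*+ result_cache */ department_id, AVG(salary) FROM employees GROUP BY department_id;
SQL> SELECT name, value FROM v$result_cache_statistics;
SQL> SET SERVEROUTPUT ON
SQL> EXEC dbms_result_cache.memory_report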
11g background processes?
The following processes are added in 11g as new background processes:
1: dbrm - DB resource manager
2: dia0 - diagnosability process
3: fbda - flashback data archiver process
4: vktm - virtual timekeeper
5: smco - space management coordinator process
6: w000 - space management slave process
NOTE: The above six are mandatory processes.
In all, 11g has around 56 new processes added, which can be seen in V$BGPROCESS.
If any one of these 6 mandatory background processes is killed/not running, will
the instance be aborted?
Background processes are started automatically when the instance is started.
The mandatory background processes are DBWn, LGWR, CKPT, SMON, PMON, and RECO. All
other processes are optional and are invoked only if the particular feature is
activated.
If any one of these 6 mandatory background processes is killed/not running, the
instance will be aborted.
Any issues related to background processes should be monitored and analyzed from
the trace files generated and the alert log.
Literal vs. bind variables?
Select * from emp where dept=10; which should I use in the WHERE clause for better
performance, a literal or a bind variable? Explain.
What is a Baseline?
Incarnation? Explain in detail. Where is the incarnation information stored?
Hard parse vs. soft parse?
What are semaphores?
What is latch?
What is Enqueue?
What are SGA_TARGET and SGA_MAX_SIZE?
SGA_MAX_SIZE is the largest amount of memory that will be available for the SGA in
the instance, and it is allocated from memory at startup. You do not have to use it
all, but it is potentially wasted if you set it too high and don't use it. It is
not a dynamic parameter. Basically, it gives the Oracle instance room to grow.
SGA_TARGET is the actual memory in use by the current SGA. This parameter is
dynamic and can be increased up to the value of SGA_MAX_SIZE.
SGA_MAX_SIZE and SGA_TARGET are both parameters used to change the SGA size.
SGA_MAX_SIZE sets the maximum value for SGA_TARGET.
SGA_TARGET is a 10g feature used to change the SGA size dynamically; it specifies
the total amount of SGA memory available to an instance.
This feature is called Automatic Shared Memory Management (ASMM). With ASMM, the
parameters java_pool_size, shared_pool_size, large_pool_size and db_cache_size are
affected.
SGA_MAX_SIZE & SGA_TARGET
http://maxwellmiranda.wordpress.com/2009/09/17/sga_max_size-sga_targe/
SGA_MAX_SIZE sets the overall amount of memory the SGA can consume but is not
dynamic.
The SGA_MAX_SIZE parameter is the maximum allowable size to which the SGA memory
area parameters can be resized. If SGA_TARGET is set to some value, then Automatic
Shared Memory Management (ASMM) is enabled, and the SGA_TARGET value can be
adjusted up to the SGA_MAX_SIZE parameter value, but not beyond it.
I.e., if SGA_MAX_SIZE=4GB and SGA_TARGET=2GB, at a later point in time you can
resize your SGA_TARGET parameter up to the value of SGA_MAX_SIZE, i.e. 4GB, but you
cannot resize SGA_TARGET to more than 4GB.
It is significant that SGA_TARGET includes the entire memory for the SGA, in
contrast to earlier releases in which memory for the internal and fixed SGA was
added to the sum of the configured SGA memory parameters. Thus, SGA_TARGET gives
you precise control over the size of the shared memory region allocated by the
database. If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup,
then the latter is bumped up to accommodate SGA_TARGET.
Do not dynamically set or unset the SGA_TARGET parameter; this should be set only
at startup.
SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that
can be used for automatic SGA memory sizing.
SGA_TARGET provides the following:
• Single parameter for total SGA size
• Automatically sizes SGA components
• Memory is transferred to where most needed
• Uses workload information
• Uses internal advisory predictions
• STATISTICS_LEVEL must be set to TYPICAL
• SGA_TARGET is dynamic
• Can be increased up to SGA_MAX_SIZE
• Can be reduced until some component reaches its minimum size
• A change in the value of SGA_TARGET affects only the automatically sized
components
If I keep SGA_TARGET=0, what will happen?
Setting SGA_TARGET=0 disables automatic SGA tuning (ASMM).
http://www.orafaq.com/wiki/SGA_target
Default value: 0 (SGA auto tuning is disabled)
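A short sketch of checking and resizing it (assumes an spfile; the sizes are
examples only):
SQL> SHOW PARAMETER sga_max_size
SQL> SHOW PARAMETER sga_target
SQL> ALTER SYSTEM SET sga_target=2G SCOPE=BOTH;  -- must stay <= sga_max_size
SQL> ALTER SYSTEM SET sga_target=0 SCOPE=BOTH;   -- disables ASMM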
What happens when you run ALTER DATABASE OPEN RESETLOGS?
The current online redo logs are archived, the log sequence number is reset to 1, a
new database incarnation is created, and the online redo logs are given a new
timestamp and SCN.
The reason to open the database with RESETLOGS is that after an incomplete
recovery, the datafiles and control files still do not come to the same point as
the redo log files. As long as the database is not consistent across all three
kinds of files (data, redo and control), you cannot open the database. The
RESETLOGS clause resets the log sequence numbers in the log files, restarting them
from 1, thus enabling you to open the database, but at the cost of losing whatever
was in the redo log files.
In what scenarios is OPEN RESETLOGS required?
An ALTER DATABASE OPEN RESETLOGS statement is required:
1. after incomplete recovery (point-in-time recovery), or
2. recovery with a backup control file, or
3. recovery with a control file recreated with the RESETLOGS option.
http://onlineappsdba.com/index.php/2009/09/11/oracle-database-incarnation-open-resetlogs-scn/
http://web.njit.edu/info/limpid/DOC/backup.102/b14191/osrecov009.htm
Whenever you perform incomplete recovery or recovery with a backup control file,
you must reset the online logs when you open the database. The new version of the
reset database is called a new incarnation.
What is the difference between RESETLOGS and NORESETLOGS?
http://oracleappstechnology.blogspot.in/2008/05/difference-between-resetlogs-and.html
After a recover database operation, open the database with: ALTER DATABASE OPEN
[NO]RESETLOGS
NORESETLOGS:
The NORESETLOGS option does not clear the redo log files during startup and leaves
the online redo logs available for recovery. It is only used in the scenario where
MANUAL RECOVERY is started, CANCEL is used, and then RECOVER DATABASE is started.
RESETLOGS:
CAUTION: Never use RESETLOGS unless necessary.
Once RESETLOGS is used, the redo log files cannot be used again and any completed
transactions in those redo logs are lost!!
Before using the RESETLOGS option, take an offline backup of the database.
What is the SCN (System Change Number)?
The system change number (SCN) is an ever-increasing value that uniquely identifies
a committed version of the database at a point in time. Every time a user commits a
transaction, Oracle records a new SCN in the redo logs.
Oracle uses SCNs in control files, datafile headers and redo records. Every redo
log file has both a log sequence number and low and high SCNs. The low SCN records
the lowest SCN recorded in the log file, while the high SCN records the highest SCN
in the log file.
What is a database incarnation?
A database incarnation is effectively a new 'version' of the database that comes
into being when you reset the online redo logs using 'alter database open
resetlogs;'.
Database incarnations fall into the following categories: current, parent, ancestor
and sibling.
i) Current incarnation: the database incarnation in which the database is currently
generating redo.
ii) Parent incarnation: the database incarnation from which the current incarnation
branched following an OPEN RESETLOGS operation.
iii) Ancestor incarnation: the parent of the parent incarnation is an ancestor
incarnation. Any parent of an ancestor incarnation is also an ancestor incarnation.
iv) Sibling incarnation: two incarnations that share a common ancestor are sibling
incarnations if neither one is an ancestor of the other.
How to view Database Incarnation(History of Database)?
SQL> select * from v$database_incarnation;
RMAN>LIST INCARNATION;
What new background processes are there in 10g and 11g?
http://oracleinstance.blogspot.in/2009/11/oracle-background-processes.html
http://www.dba-oracle.com/concepts/background_processes_concepts.htm
ALL ABOUT THE SHMMAX, SHMMNI AND SHMALL KERNEL PARAMETERS
Configuring SHMMAX and SHMALL for Oracle on Linux
SHMMAX and SHMALL are two key shared memory parameters that directly impact the way
Oracle creates an SGA. Shared memory is nothing but part of the Unix IPC
(Inter-Process Communication) system maintained by the kernel, where multiple
processes share a single chunk of memory to communicate with each other.
While trying to create an SGA during database startup, Oracle chooses from one of
three memory management models: a) one-segment, b) contiguous multi-segment or c)
non-contiguous multi-segment. Which of these models is adopted depends on the size
of the SGA and the values defined for the shared memory parameters in the Linux
kernel, most importantly SHMMAX.
So what are these parameters, SHMMAX and SHMALL?
SHMMAX is the maximum size of a single shared memory segment, set in bytes.
silicon:~ # cat /proc/sys/kernel/shmmax
536870912
SHMALL is the total size of shared memory segments system-wide, set in pages.
silicon:~ # cat /proc/sys/kernel/shmall
1415577
The key thing to note here is that the value of SHMMAX is set in bytes but the
value of SHMALL is set in pages.
What's the optimal value for SHMALL?
As SHMALL is the total size of shared memory segments system-wide, it should always
be less than the physical memory on the system and should be greater than the sum
of the SGAs of all the Oracle databases on the server. Once this value (the sum of
the SGAs) hits the limit, i.e. the value of shmall, any attempt to start a new
database (or even an existing database with a resized SGA) will result in an 'out
of memory' error (below). This is because there won't be any more shared memory
segments that Linux can allocate for the SGA.
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device.
The above can happen for two reasons: either the value of shmall is not set to an
optimal value, or you have reached the threshold on this server.
Setting the value of SHMALL to an optimal value is straightforward. All you need to
know is how much physical memory (excluding cache/swap) you have on the system, how
much of it should be set aside for the Linux kernel, and how much should be
dedicated to the Oracle databases.
For example, say the physical memory of a system is 6GB, out of which you want to
set aside 1GB for the Linux kernel for OS operations and dedicate the remaining 5GB
to the Oracle databases. Here is how you get the value for SHMALL:
convert the 5GB to bytes and divide by the page size. Remember, SHMALL should be
set in pages, not bytes.
So here goes the calculation.
Determine the page size first. In my case it's 4096, which is the recommended
default in most cases, so you can keep it the same.
myhost:~ # getconf PAGE_SIZE
4096
(Note: /proc/sys/kernel/shmmni also happens to show 4096, but that value is the
maximum number of shared memory segments, not the page size; rely on getconf
PAGE_SIZE.)
Convert 5GB into bytes and divide by the page size; I used the Linux calculator bc
to do the math.
myhost:~ # echo "( 5 * 1024 * 1024 * 1024 ) / 4096" | bc -l
1310720.00000000000000000000
Reset shmall and load it dynamically into the kernel:
myhost:~ # echo "1310720" > /proc/sys/kernel/shmall
myhost:~ # sysctl -p
Verify that the value has taken effect:
myhost:~ # sysctl -a | grep shmall
kernel.shmall = 1310720
Another way to look this up is:
silicon:~ # ipcs -lm
------ Shared Memory Limits --------
max number of segments = 4096 /* SHMMNI */
max seg size (kbytes) = 524288 /* SHMMAX */
max total shared memory (kbytes) = 5242880 /* SHMALL */
min seg size (bytes) = 1
To keep the value effective after every reboot, add the following line to
/etc/sysctl.conf:
echo "kernel.shmall = 1310720" >> /etc/sysctl.conf
Also verify whether sysctl.conf is enabled and will be read during boot:
myhost:~ # chkconfig boot.sysctl
boot.sysctl on
If this returns "off", it is disabled; turn it on by running:
myhost:~ # chkconfig boot.sysctl on
boot.sysctl on
What's the optimal value for SHMMAX?
Oracle makes use of one of the three memory management models to create the SGA
during database startup, and it does this in the following sequence: first Oracle
attempts to use the one-segment model; if this fails, it proceeds with the next
one, the contiguous multi-segment model; and if that fails too, it goes with the
last option, the non-contiguous multi-segment model.
So during startup Oracle looks at the shmmax parameter and compares it with the
initialization parameter *.sga_target. If shmmax > *.sga_target, then Oracle goes
with the one-segment model approach, where the entire SGA is created within a
single shared memory segment.
The above attempt (one-segment) fails if the SGA size, i.e. *.sga_target, is
greater than shmmax; Oracle then proceeds with the second option, the contiguous
multi-segment model. Contiguous allocations, as the name indicates, are a set of
shared memory segments which are contiguous in memory; if Oracle can find such a
set of segments, the entire SGA is created to fit within it.
If it cannot find a set of contiguous allocations, the last of the three options is
chosen: non-contiguous multi-segment allocation, in which Oracle has to grab the
free memory segments fragmented between used spaces.
So, let's say you know that the maximum SGA size of any database on the server
stays below 1GB; you can then set shmmax to 1GB. But if you have SGA sizes for
different databases spread between 512MB and 2GB, set shmmax to 2GB, and so on.
Like SHMALL, SHMMAX can be defined by one of these methods.
Dynamically reset and reload it into the kernel:
myhost:~ # echo "536870912" > /proc/sys/kernel/shmmax
myhost:~ # sysctl -p    # dynamically reload the parameters
Or use sysctl to reset and reload:
myhost:~ # sysctl -w kernel.shmmax=536870912
To set it permanently so it survives reboots:
myhost:~ # echo "kernel.shmmax=536870912" >> /etc/sysctl.conf
The install doc for 11g recommends that the value of shmmax be set to '4GB - 1
byte' or half the size of physical memory, whichever is lower. I believe '4GB - 1
byte' is related to the limitation on 32-bit (x86) systems, where the virtual
address space for a user process can only be a little less than 4GB. As there is no
such limitation for 64-bit (x86_64) systems, you can define SGAs larger than 4GB.
But the idea here is to let Oracle use the efficient one-segment model, and for
this shmmax should stay higher than the SGA size of any individual database on the
system.
oracle database smart flash cache
It is around 02:15 am IST and I am not getting any sleep. I suddenly woke up and
saw today's visit count for my somireddy.wordpress.com. I felt very, very happy,
and you know why: today's visit count is 216 (one more new milestone for my blog).
Of course this is a very low count, but it means a lot to me. I think I am doing
something wrong here: every day I am checking the count. I should start thinking in
a different way; I need to think about improving the blog rather than checking the
visit count frequently. Once I start concentrating on the topics to be added and
the quality of the blog, the visit count will definitely increase. The visitor
count is a by-product of the contents of the site. As Sri Krishna rightly said, you
do whatever you can, but the result is a by-product of our efforts; there is no
need to check the visit count every now and then. The reason I mention this point
is that most of us check on the results and do not focus on the efforts, and hence
there is a gap. Hmm, too much of a boring class; let me come back to the topic.
How to generate the trace for a particular error?
Last week we upgraded one database from 10.2.0.2 to 11.2.0.3.4. Post upgrade we
started getting ORA-01008 for a particular module, and we tried all the possible
ways to simulate it in the TEST environment, but we could not find any details.
Finally, support requested us to generate the trace on the PRODUCTION server
itself. I felt this is very, very useful and thought of sharing it with you all.
alter system set events '1008 trace name ERRORSTACK level 3';
Once the trace is generated (it will be under the trace directory), you can set
this event off as below:
alter system set events '1008 trace name ERRORSTACK off';
Note: here 1008 is nothing but the ORA-01008 error which you are getting.
The best things about the above are:
1) It will not generate a trace for every session; it generates one only when the
error occurs. Hence there is no need to worry about huge trace files.
2) Performance-wise there is also no impact, since this only fires when the error
occurs.
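On 11g you can find where these trace files land by querying V$DIAG_INFO, a quick
sketch:
SQL> SELECT name, value FROM v$diag_info WHERE name IN ('Diag Trace', 'Default Trace File');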
Oracle RAC Load balancing and Failover
LOAD BALANCING in RAC:
The Oracle RAC system can distribute the load over all nodes in the cluster. This
feature is called load balancing.
There are two methods of load balancing:
1. Client load balancing
2. Server load balancing
1. Client Load Balancing
Client load balancing distributes new connections among the Oracle RAC nodes so
that no server is overloaded with connection requests. It is configured at the net
service name level by providing multiple descriptions in a description list or
multiple addresses in the TNS entry. For example, if a connection fails over to
another node in case of failure, client load balancing ensures that the redirected
connections are distributed among the other nodes in the RAC.
Configure client-side connect-time load balancing by setting LOAD_BALANCE=ON in the
corresponding client-side TNS entry:
TESTRAC =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = testdb.selectstarfrom.com))
)
Basically, client-side load balancing is controlled by the client tnsnames
parameter LOAD_BALANCE.
(load_balance=yes) instructs SQL*Net to progress through the list of listener
addresses in the address_list section of the net service name in a random sequence.
When set to OFF, it instructs SQL*Net to try the addresses sequentially until one
succeeds.
2. Server Load Balancing:
With server-side load balancing, the listener directs a connection request to the
best instance currently providing the service. The init parameter remote_listener
should be set; when it is, each instance registers with the TNS listeners running
on all nodes within the cluster.
There are two types of server-side load balancing:
Load based - server-side load balancing redirects connections by default depending
on node load. This is the default.
Session based - session-based load balancing takes into account the number of
sessions connected to each node and then distributes new connections to balance the
number of sessions across the different nodes.
Server load balancing distributes the processing workload among the Oracle RAC
nodes. It divides the connection load evenly between all available listeners and
distributes new user session connection requests to the least loaded listener(s)
based on the total number of sessions already connected. Each listener communicates
with the other listener(s) via each database instance's PMON process.
Configure the server-side connect-time load balancing feature by setting the
REMOTE_LISTENER initialization parameter of each instance to a TNS name that
describes the list of all available listeners:
TESTRAC_LISTENERS =
(DESCRIPTION =
(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1)(PORT = 1521)))
(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2)(PORT = 1521)))
)
Set the *.remote_listener='TESTRAC_LISTENERS' initialization parameter in the
database's shared SPFILE and add the TESTRAC_LISTENERS entry to the TNSNAMES.ORA
file in the Oracle home of each node in the cluster.
Once you configure server-side connect-time load balancing, each database's PMON
process will automatically register the database with the database's local listener
as well as cross-register the database with the listeners on all other nodes in the
cluster. The nodes themselves then decide which node is least busy and connect the
client to that node.
FAILOVER in RAC:
The Oracle RAC system can protect against failures caused by OS or server crashes
or hardware failures. When a node failure occurs in a RAC system, connection
attempts can fail over to the other surviving nodes in the cluster; this feature is
called failover.
There are two methods of failover:
1. Connection Failover
2. Transparent Application Failover (TAF)
1. Connection Failover
This is done by the client tnsnames parameter FAILOVER.
(failover=on) enables clients to connect to another listener if the initial
connection to the first listener fails. Without connect-time failover, Oracle Net
attempts a connection with only one listener.
Enable client-side connect-time failover by setting FAILOVER=ON in the
corresponding client-side TNS entry:
TESTRAC =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(FAILOVER = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = testdb.selectstarfrom.com))
)
If LOAD_BALANCE is set to ON, clients randomly attempt connections to any of the
nodes. If a client makes a connection attempt to a down node, the client needs to
wait until it receives the information that the node is not accessible before
trying an alternate address in the ADDRESS_LIST.
2. Transparent Application Failover (TAF)
Transparent Application Failover (TAF) is a feature of the Oracle Call Interface
(OCI) driver on the client side. It enables the application to automatically
reconnect to the database if the database instance to which the connection was made
fails. In this case, the active transactions roll back.
Tnsnames parameter: FAILOVER_MODE
e.g. (failover_mode=(type=select)(method=basic))
The failover mode type can be either SESSION or SELECT.
With SESSION failover, just the session fails over to the next available node; with
SELECT, the running SELECT query is also resumed.
TAF can also be configured with just server-side service settings by using the
dbms_service package, as sketched below.
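A hedged sketch of that server-side approach (the service name is illustrative; on
RAC the service itself is usually managed through srvctl as well):
BEGIN
  dbms_service.modify_service(
    service_name     => 'testdb.selectstarfrom.com',  -- illustrative service name
    failover_method  => dbms_service.failover_method_basic,
    failover_type    => dbms_service.failover_type_select,
    failover_retries => 180,
    failover_delay   => 5);
END;
/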
If a connection failure occurs after a connection is established, the connection
fails over to another surviving node. Any uncommitted transactions are rolled back,
and server-side program variables and session properties will be lost. In some
cases the SELECT statements are automatically re-executed on the new connection,
with the cursor positioned on the row on which it was positioned prior to the
failover.
TESTRAC =
(DESCRIPTION =
(LOAD_BALANCE = ON)
(FAILOVER = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = testdb.selectstarfrom.com)
(FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
)
)
Interview questions_DUMP1
I have been working as a DBA for the last 10 years, I have interviewed many
candidates, and I attended a couple of interviews myself some time back. Today I
have decided to prepare an interview question bank so that it will be useful for
many DBA aspirants.
The following are the questions which I have collected from the net and am placing
on this page. However, I will sort all these questions by topic and place them in
different sections; you can refer to them there.
1- You have written in your CV that as a DBA you can test a backup. How?
Regularly test the backup & recovery strategy, because it shows the integrity of
the backups, the validity of the backup and recovery method, and reduces problems
before they occur in a production database (one way to do this with RMAN is
sketched below).
Testing the backup and recovery plan ensures:
That the backup and recovery methods are sound
The integrity of the backups
That the backup and recovery strategy meets business needs
It minimizes problems before they occur in a production environment
It ensures that personnel can react quickly and effectively in case any errors
arise, avoiding a crisis situation
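One practical way to test backups without overwriting anything is RMAN validation,
a minimal sketch (the real test, of course, is a full restore onto a test server):
RMAN> RESTORE DATABASE VALIDATE;
RMAN> BACKUP VALIDATE DATABASE;
The first command reads the backups and checks that they are restorable; the second
checks the live datafiles for corruption without writing a backup.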
2- You have written in your CV that you can monitor physical and logical backups.
How?
http://www.oracle.com/technology/deploy/availability/htdocs/BR_Overview.htm
A backup is a representative copy of data. This copy can include important parts of
a database such as the control file, redo logs, and datafiles. A backup protects
data from application error and acts as a safeguard against unexpected data loss by
providing a way to restore original data. Backups are divided into physical backups
and logical backups. Physical backups are copies of physical database files. The
phrase 'backup and recovery' usually refers to the transfer of copied files from
one location to another, along with the various operations performed on these
files.
In contrast, logical backups contain data that is exported using SQL commands and
stored in a binary file. Oracle records both committed and uncommitted changes in
redo log buffers. Logical backups are used to supplement physical backups.
Restoring a physical backup means reconstructing it and making it available to the
Oracle server. To recover a restored backup, data is updated using redo records
from the transaction log. The transaction log records changes made to the database
after the backup was taken.
Types of Backup
There are two kinds of database backups available to us: physical backups and
logical backups.
A physical backup is an actual physical copy of the files of the database, copied
from one location to another.
A logical backup is a copy of the data in the database, but not a copy of the
physical files.
Cold Backup
A cold backup is a physical backup.
During a cold backup the database is closed and not available to users. All files
of the database are copied (image copy). The datafiles do not change during the
copy so the database is in sync upon restore.
Used when: Service level allows for some down time for backup
Hot Backup
A hot backup is a physical backup. In a hot backup the database remains open and
available to users. All files of the database are copied (image copy). There may be
changes to the database as the copy is made and so all log files of changes being
made during the backup must be saved too. Upon a restore, the changes in the log
files are reapplied to bring the database in sync.
Used when: a full backup of the database is needed and the service level allows no
downtime for the backup
Logical Backup
A logical backup is an extract of the database. All the SQL statements to create
the objects and all the SQL statements to populate them are included in the
extract. Oracle provides a utility, export, to create the extract. A partner
utility, import, is used to bring the data back into the database.
A logical backup can be done at the table, schema (or proxy owner), or database
level. That is, we can extract only a list of specified tables, a list of specified
schemas, or the full database.
Used to:
Move or archive a database
Move or archive a table(s)
Move or archive a schema(s)
Verify the structures in the database.
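A small sketch with the classic export/import utilities (credentials and file names
are illustrative; from 10g onwards the Data Pump equivalents are expdp/impdp):
$ exp scott/tiger FILE=emp.dmp TABLES=emp
$ imp scott/tiger FILE=emp.dmp TABLES=emp IGNORE=y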
3- You have written in your CV that you can rebuild an index to rectify segment
fragmentation (coalesce).
Index: an index is a tree structure that allows direct access to a row in a table.
Rebuild: an index that contains many deleted entries should be rebuilt, as in the
case of an index on the order number of an Orders table, where completed orders are
deleted & new orders with higher numbers are added.
ALTER INDEX orders_index REBUILD
TABLESPACE index02;
The above command moves an index to a different tablespace and improves space
utilization by removing deleted entries.
COALESCE: if you encounter index fragmentation, you can rebuild or coalesce.
Coalescing an index is a block rebuild that is performed online, i.e. merging
B-tree index leaf blocks so that they can be freed for reuse.
ALTER INDEX hr.employees_idx COALESCE;
For when one must rebuild indexes, read the thread at forums.oracle.com below:
http://forums.oracle.com/forums/thread.jspa?threadID=580451&tstart=0
4- You have written in your CV that you can automatically schedule a logical
backup. How?
http://rakeshocp.blogspot.com/2007/09/batch-file-scripts.html
Visit the above blog page for automatic scheduling of a logical backup.
5- HWM (high water mark): when will it be decreased? How can I do it? (Truncate
table)
High water mark: the high water mark is the boundary between used and unused space
in a segment. As requests for new free blocks that cannot be satisfied by existing
free lists are received, the block to which the high water mark points becomes a
used block, and the high water mark is advanced to the next block. In other words,
the segment space to the left of the high water mark is used, and the space to the
right of it is unused.
The high water mark divides a segment into used blocks and free blocks.
Blocks below the high water mark (used blocks) have at least once contained data;
this data might since have been deleted.
Since Oracle knows that blocks beyond the high water mark don't have data, it only
reads blocks up to the high water mark in a full table scan.
Oracle keeps track of the high water mark for a segment in the segment header.
Moving the high water mark:
In normal DB operations, the high water mark only moves upwards, not downwards, the
exception being TRUNCATE.
If there is a lot of free space below the high water mark, one might consider using
ALTER TABLE MOVE statements. See 'On shrinking table sizes'.
http://www.adp-gmbh.ch/blog/2005/july/20.html
truncate table table_name;
truncate cluster cluster_name;
A statement like DELETE FROM tablename deletes all records in the table, but it
does not free any space (see 'On table sizes'). In order to free the space as well,
use TRUNCATE. However, a truncate cannot be rolled back.
Truncate table: basically, a truncate statement resets the high water mark to its
initial position.
A truncate statement cannot be used on a synonym.
Ways of resetting the high water mark (a sketch of the CTAS approach follows):
- export/import
- CTAS (create table as select)
- dbms_redefinition
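A rough sketch of the CTAS approach (the table name is illustrative; indexes,
constraints, grants and triggers must be recreated afterwards):
SQL> CREATE TABLE emp_new AS SELECT * FROM emp;
SQL> DROP TABLE emp;
SQL> RENAME emp_new TO emp;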
6- Alert log file? What does it contain?
Location defined by BACKGROUND_DUMP_DEST.
The alert log file should be the first place to look when diagnosing day-to-day
operations or errors.
It keeps a record of:
When the DB was started & shut down
A list of all non-default initialization parameters
Startup of the background processes
The log sequence number LGWR is writing to
Information about log switches
Creation of tablespaces & undo segments
ALTER statements that have been issued
Information about error messages (ORA-600) and extent errors
7- Control file? What information does it contain? When is it read?
A small binary file that:
Defines the current state of the physical database
Is required:
at the mount stage during startup
to operate the database
Is linked to a single database
Loss may require recovery
The control file contains:
The database name & identifier
The timestamp of DB creation
Tablespace names
Names & locations of datafiles & online redo log files
Current online redo log file information
Checkpoint information
Begin & end of undo segments
Redo log archive information
Backup information
It is read at the mount stage of the database.
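To see the control files of your own database, a quick sketch:
SQL> SELECT name FROM v$controlfile;
SQL> SHOW PARAMETER control_files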
8- I have lost my control file & don't have any backup; can I start the DB &
operate? How? (You create a control file in NOMOUNT mode.)
Yes, we can operate the database; follow the steps below:
Start the instance if necessary.
Shut down the instance if the start failed.
Start the instance in NOMOUNT mode.
Run the trace file script to recreate the control file.
Determine if a full backup is required and perform one if necessary.
Ensure that the instance is started & the database is open.
http://www.orafaq.com/wiki/Control_file_recovery
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/control.htm#i1006360
Steps for creating new control files
Complete the following steps to create a new control file.
1. Make a list of all datafiles and redo log files of the database.
If you follow the recommendations for control file backups as discussed in 'Backing
Up Control Files', you will already have a list of datafiles and redo log files
that reflect the current structure of the database. If you have no such list,
executing the following statements will produce one:
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files';
If you have no such lists and your control file has been damaged so that the
database cannot be opened, try to locate all of the datafiles and redo log files
that constitute the database. Any files not specified in step 5 are not recoverable
once a new control file has been created. Moreover, if you omit any of the files
that make up the SYSTEM tablespace, you might not be able to recover the database.
2. Shut down the database.
If the database is open, shut down the database normally if possible. Use the
IMMEDIATE or ABORT clauses only as a last resort.
3. Back up all datafiles and redo log files of the database.
4. Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT
5. Create a new control file for the database using the CREATE CONTROLFILE
statement.
When creating a new control file, specify the RESETLOGS clause if you have lost any
redo log groups in addition to control files. In this case, you will need to
recover from the loss of the redo logs (step 8). You must specify the RESETLOGS
clause if you have renamed the database. Otherwise, select the NORESETLOGS clause.
6. Store a backup of the new control file on an offline storage device. See
'Backing Up Control Files' for instructions on creating a backup.
7. Edit the CONTROL_FILES initialization parameter for the database to indicate all
of the control files now part of your database as created in step 5 (not including
the backup control file). If you are renaming the database, edit the DB_NAME
parameter in your instance parameter file to specify the new name.
8. Recover the database if necessary. If you are not recovering the database, skip
to step 9.
If you are creating the control file as part of recovery, recover the database. If
the new control file was created using the NORESETLOGS clause (step 5), you can
recover the database with complete, closed database recovery.
If the new control file was created using the RESETLOGS clause, you must specify
USING BACKUP CONTROLFILE. If you have lost online or archived redo logs or
datafiles, use the procedures for recovering those files.
See also: Oracle Database Backup and Recovery Basics and Oracle Database Backup and
Recovery Advanced User's Guide for information about recovering your database and
methods of recovering a lost control file.
9. Open the database using one of the following methods:
o If you did not perform recovery, or you performed complete, closed database
recovery in step 8, open the database normally:
ALTER DATABASE OPEN;
o If you specified RESETLOGS when creating the control file, use the ALTER DATABASE
statement, indicating RESETLOGS:
ALTER DATABASE OPEN RESETLOGS;
The database is now open and available for use.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/control.htm#i1006360
====================
Trace file:
When one of the Oracle background processes (such as DBWR, LGWR, PMON, SMON and so
on) encounters an exception, it will write a trace file.
These trace files are also recorded in the alert.log.
Trace files are also created for diagnostic dump events.
An ORA-00600 error also produces a trace file.
9- MOUNT / NOMOUNT?
NOMOUNT: an Oracle instance is started in the NOMOUNT stage only during database
creation or the recreation of control files.
MOUNT: to perform specific maintenance operations without opening the database:
• Renaming datafiles
• Enabling & disabling redo log archiving options
• Performing full database recovery
10- SGA? Its components?
SGA: the Shared Global Area is allocated at instance startup and is a fundamental
component of an Oracle instance.
Its size is defined by SGA_MAX_SIZE.
Components:
• Shared pool (SHARED_POOL_SIZE)
• Database buffer cache (DB_CACHE_SIZE)
• Redo log buffer (LOG_BUFFER)
Additional components:
• Large pool
• Java pool
11- Checkpoint? Why Checkpoint? Where it�s recorded? When it�s recorded?
Database Checkpoint: Checkpoints are you used to determine where recovery should
start.
Checkpoint position � where recovery should start
Checkpoint queue � link list of dirty blocks
The position in the redo log where recovery should start is referred as to as the
checkpoint position.
Types of Checkpoint
1- Full check point
All dirty buffers are written
Shutdown normal, immediate, or transactional
Alter System Checkpoint
2- Incremental Checkpoint
3- Partial Checkpoint
Alter tablespace begin backup
Alter tablespace offline normal
When a checkpoint occurs, Oracle must update the headers of all datafiles to record
the details of the checkpoint. This is done by the CKPT process. The CKPT process
does not write blocks to disk; DBWn always performs that work.
Every 3 seconds, CKPT records in the control file the RBA of the oldest entry in
the checkpoint queue. This RBA represents the point in the redo log at which
instance recovery is to begin after an instance failure. It can do this because all
of the data blocks represented in prior redo records are guaranteed to have been
written to disk by DBWn.
On a log switch, CKPT also writes this information to the headers of the datafiles.
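To see where the checkpoint currently stands, the checkpoint SCN recorded in the
control file can be compared with the datafile headers; both views below are
standard:
SQL> SELECT checkpoint_change# FROM v$database;
SQL> SELECT name, checkpoint_change# FROM v$datafile_header;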
12- LGWR? When does it write?
LGWR performs sequential writes from the redo log buffer to the online redo log
files in the following situations:
At commit
When the redo log buffer is one-third full
When there is more than 1 MB of changes recorded in the redo log buffer
Before DBWn writes modified blocks
Every 3 seconds
13- Duties of DBA?
• To plan & create databases
• To manage database availability
• To manage physical & logical structures
• To manage storage based on design
• To manage security
• Network administration
• Backup & recovery
• Database tuning
14- What's STATSPACK? How can I use STATSPACK?
STATSPACK: STATSPACK is a performance diagnosis tool, available since Oracle8i.
STATSPACK can be considered the successor to BSTAT/ESTAT, incorporating many new
features. STATSPACK is a diagnosis tool for instance-wide performance problems; it
also supports application tuning activities by providing data that identifies
high-load SQL statements. STATSPACK can be used both proactively, to monitor the
changing load on a system, and reactively, to investigate a performance problem.
UTLBSTAT - UTLESTAT
The BSTAT/ESTAT utilities capture information directly from Oracle's in-memory
structures and then compare the information from two snapshots in order to produce
an elapsed-time report showing the activity of the database. If we look inside
utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the
v$sysstat view:
insert into stats$begin_stats select * from v$sysstat;
insert into stats$end_stats select * from v$sysstat;
How can I use STATSPACK?
Create the PERFSTAT tablespace
The STATSPACK utility requires an isolated tablespace to hold all of its objects
and data. For uniformity, it is suggested that the tablespace be called PERFSTAT,
the same name as the schema owner of the STATSPACK tables. It is important to
closely watch the STATSPACK data to ensure that the stats$sql_summary table is not
taking an inordinate amount of space.
SQL> CREATE TABLESPACE perfstat
DATAFILE '/u01/oracle/db/AKI1_perfstat.dbf' SIZE 1000M REUSE
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO
PERMANENT
ONLINE;
Run catdbsyn.sql as SYS
Run dbmspool.sql as SYS
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> start spcreate.sql
Adjusting the STATSPACK Collection Level
STATSPACK has two types of collection options, level and threshold. The level
parameter controls the type of data collected from Oracle, while the threshold
parameter acts as a filter for the collection of SQL statements into the
stats$sql_summary table.
SQL> SELECT * FROM stats$level_description ORDER BY snap_level;
Level 0: This level captures general statistics, including rollback segment, row
cache, SGA, system events, background events, session events, system statistics,
wait statistics, lock statistics, and Latch information.
Level 5: This level includes capturing high resource usage SQL Statements, along
with all data captured by lower levels.
Level 6: This level includes capturing SQL plan and SQL plan usage information for
high resource usage SQL Statements, along with all data captured by lower levels.
Level 7: This level captures segment level statistics, including logical and
physical reads, row lock, itl and buffer busy waits, along with all data captured
by lower levels.
Level 10: This level includes capturing Child Latch statistics, along with all data
captured by lower levels.
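Once spcreate.sql has run, snapshots are taken with the statspack.snap procedure
and reports are produced with spreport.sql; a minimal session, with the level-7
call shown as an example of the i_snap_level parameter:
SQL> CONNECT perfstat/perfstat
SQL> EXECUTE statspack.snap;                      -- snapshot at the default level (5)
SQL> EXECUTE statspack.snap(i_snap_level => 7);   -- snapshot including segment statistics
SQL> @?/rdbms/admin/spreport                      -- prompts for two snapshot IDs and builds the report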
15- Methods of Backup � (Cold & Hot Backup)?
A cold backup, also called an offline backup, is a database backup when the
database is offline and thus not accessible for updating. This is the safest way to
back up because it avoids the risk of copying data that may be in the process of
being updated. However, a cold backup involves downtime because users cannot use
the database while it is being backed up.
When system downtime must be minimized, a hot backup can provide an alternative to
the cold backup. A hot backup can be done even as users access the database, but
some method must be used to ensure that data being updated is noted and can be
copied when the update is complete.
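For a user-managed hot backup, the database must run in ARCHIVELOG mode; datafiles
are copied at the OS level while the database (or individual tablespaces) is in
backup mode. A minimal sketch follows; the copy path is illustrative, and ALTER
DATABASE BEGIN BACKUP is available from 10g (earlier releases use ALTER TABLESPACE
... BEGIN BACKUP per tablespace):
SQL> ALTER DATABASE BEGIN BACKUP;
$ cp /u01/oradata/orcl/*.dbf /backup/orcl/      -- OS-level copy; paths are illustrative
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;          -- archive the redo covering the copy window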
16- Why do you want to be a DBA? Why not a developer?
I think being an Oracle DBA is really rewarding. It can be a highly demanding job,
but I feel that is part of the exciting challenge of being a DBA. I chose this path
for myself and I am really happy with it. It is an exciting and rewarding job.
17- What is a Recovery Catalog? Why do we need it? Complete command/steps for
creating a recovery catalog? How will it know about the primary database? What
role/privileges are given to the user when connected to the recovery catalog? How
can I connect with RMAN? Its steps?
What is a Recovery Catalog?
• A recovery catalog is a schema that is created in a separate tablespace.
• RMAN propagates information about the database structure, archived redo log
files, and datafile copies into the recovery catalog from the control file of the
target database.
http://www.idevelopment.info/data/Oracle/DBA_tips/RMAN_9i/RMAN9_4.shtml
To use RMAN, a recovery catalog is not necessary. Remember that RMAN will always
use the control file of the target database to store backup and recovery
operations. To use a recovery catalog, you will first need to create a recovery
catalog database and create a schema for it. The catalog (database objects) will be
located in the default tablespace of the schema owner. Please note that the owner
of the catalog cannot be the SYS user.
The recovery catalog database should be created on a different host, on different
disks, and in a different database from the target database you will be backing up.
If you do not, the benefits of using a recovery catalog are lost if you lose the
database and need to restore.
The first step is to create a database for the recovery catalog. For the purpose of
this example, I created an Oracle 9.2.0 database named CATDB and assume the
following:
• You have access to the SYS password for the database.
• A temporary tablespace named TEMP already exists.
• A normal tablespace named TOOLS exists and will be used to store the recovery
catalog.
• The database is configured in the same way as all normal databases; for example,
catalog.sql and catproc.sql have been successfully run.
Now, let's create the recovery catalog:
1. Start SQL*Plus and connect with SYSDBA privileges to the database containing the
recovery catalog:
% sqlplus "sys/change_on_install as sysdba"
2. Create a user and schema for the recovery catalog:
SQL> CREATE USER rman IDENTIFIED BY rman
DEFAULT TABLESPACE tools
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON tools;
User created.
3. Grant the RECOVERY_CATALOG_OWNER role to the schema owner. This role provides
the user with privileges to maintain and query the recovery catalog:
SQL> GRANT RECOVERY_CATALOG_OWNER TO rman;
Grant succeeded.
4. Grant other desired privileges to the RMAN user:
SQL> GRANT CONNECT, RESOURCE TO rman;
Grant succeeded.
5. After creating the catalog owner, create the catalog itself by using the CREATE
CATALOG command within the RMAN interface. This command will create the catalog in
the default tablespace of the catalog owner. Connect to the database that will
contain the catalog as the catalog owner as follows:
% rman catalog rman/rman@catdb
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to recovery catalog database
recovery catalog is not installed
6. Now, run the CREATE CATALOG command to create the catalog. Note that this
process can take several minutes to complete.
RMAN> create catalog;
recovery catalog created
Registering the Target Database
Before using RMAN with a recovery catalog, you will need to register the target
database(s) in the recovery catalog. RMAN will obtain all the information it needs
to register the target database from the database itself.
As long as each target database has a distinct DBID, you can register more than one
target database in the same recovery catalog. Each database registered in a given
catalog must have a unique database identifier (DBID), but not necessarily a unique
database name.
You can use either the command-line utilities provided by RMAN or the Oracle
Enterprise Manager GUI to register the target database. For the purpose of this
example, I will be using the command-line utilities. I will be registering a
database named TARGDB to a recovery catalog within a database named CATDB. The
target database must be either mounted or opened in order to register it.
% . oraenv
ORACLE_SID = [TARGDB] ? TARGDB
% rman target backup_admin/backup_admin catalog rman/rman@catdb
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: TARGDB (DBID=2457750772)
connected to recovery catalog database
RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete
18- RMAN Incremental Backups? What are Differential & Cumulative Backups?
Incremental backup is a backup that includes only those blocks that have changed
since the previous backup.
Click the link below for details about differential and cumulative backups.
http://download-uk.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm
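In short: a differential level 1 backup copies blocks changed since the most recent
level 1 or level 0 backup, while a cumulative level 1 copies blocks changed since
the most recent level 0. The corresponding RMAN commands are standard syntax:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- baseline containing all used blocks
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             -- differential incremental (the default)
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- cumulative incremental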
19- Write a statement/command for exporting all the objects of owner 'HR'?
exp userid=hr/hr owner=hr file=hr.dmp direct=y
(A schema-level export uses owner=; full=y would attempt a full database export
instead.)
20- Direct=y
Determines whether to use direct or conventional path export. Direct path exports
bypass the SQL command-processing layer, thereby enhancing performance.
21- Standby Databases?
A standby database is a transactionally consistent copy of the primary database. A
standby database is initially created from a backup copy of the primary database.
Once created, Data Guard automatically maintains the standby database by
transmitting primary database redo data to the standby system and then applying the
redo logs to the standby database.
Similar to a primary database, a standby database can be either a single-instance
Oracle database or an Oracle Real Application Clusters database.
A standby database can be either a physical standby database or a logical standby
database:
Physical standby database
Provides a physically identical copy of the primary database, with on-disk database
structures that are identical to the primary database on a block-for-block basis.
The database schema, including indexes, is the same. A physical standby database
is kept synchronized with the primary database by recovering the redo data received
from the primary database.
Logical standby database
Contains the same logical information as the production database, although the
physical organization and structure of the data can be different. It is kept
synchronized with the primary database by transforming the data in the redo logs
received from the primary database into SQL statements and then executing the SQL
statements on the standby database. A logical standby database can be used for
other business purposes in addition to disaster recovery requirements. This allows
users to access a logical standby database for queries and reporting purposes at
any time. Thus, a logical standby database can be used concurrently for data
protection and reporting.
For more details about Stand By databases visit the link below.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/concepts.htm
22- RAC?
Real Application Clusters harnesses the processing power of multiple interconnected
computers. Real Application Clusters software and a collection of hardware known as
a cluster unite the processing power of each component to create a robust computing
environment.
You can use Real Application Clusters to deliver high performance, increased
throughput, and high availability. Before deploying Real Application Clusters,
however, you should understand Real Application Clusters processing.
In Real Application Clusters environments, all active instances can concurrently
execute transactions against a shared database. Real Application Clusters
coordinates each instance's access to the shared data to provide data consistency
and data integrity.
Harnessing the power of clusters offers obvious advantages. A large task divided
into subtasks and distributed among multiple nodes is completed sooner and more
efficiently than if you processed the entire task on one node. Cluster processing
also provides increased performance for larger workloads and for accommodating
rapidly growing user populations.
For more details visit the link below.
http://download-uk.oracle.com/docs/cd/B10501_01/rac.920/a96597/psintro.htm
23- Partitioning (List, Range, Hash)
More details: http://www.devarticles.com/c/a/Oracle/Partitioning-in-Oracle/
Partitioning provides facilities for implementing large, scalable applications. It
enables control over tables and indexes at a lower level of granularity than is
possible with the base Enterprise Edition (Partitioning is a separately licensed
option).
CREATE TABLESPACE part1
DATAFILE 'c:\temp\part01.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
CREATE TABLESPACE part2
DATAFILE 'c:\temp\part02.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
CREATE TABLESPACE part3
DATAFILE 'c:\temp\part03.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
CREATE TABLESPACE part4
DATAFILE 'c:\temp\part04.dbf' SIZE 50M
BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 256K
SEGMENT SPACE MANAGEMENT AUTO
ONLINE;
HASH Partitioning
Hash partitioning maps data to partitions based on a hashing algorithm, evenly
distributing data between the partitions. This is typically used where ranges
aren't appropriate, e.g. customer number or product ID. It enables partitioning of
data that does not lend itself to range or list partitioning. Note that the hash
values visible in gv$session (sql_hash_value, prev_hash_value) are hashes of SQL
statements in the library cache, not partition hash values, which are computed
internally:
SELECT program, sql_hash_value, prev_hash_value FROM gv$session;
CREATE TABLE hash_part (
prof_history_id NUMBER(10),
person_id NUMBER(10) NOT NULL,
organization_id NUMBER(10) NOT NULL,
record_date DATE NOT NULL,
prof_hist_comments VARCHAR2(2000))
PARTITION BY HASH (prof_history_id)
PARTITIONS 3
STORE IN (part1, part2, part3);
desc hash_part
SELECT table_name, tablespace_name, partitioned
FROM user_tables;
desc user_tab_partitions
SELECT partition_name, tablespace_name
FROM user_tab_partitions;
-------------------------
LIST Partitioning
Explicitly controls how rows map to partitions. You do this by specifying a list of
discrete values for the partitioning key in the description for each partition.
CREATE TABLE list_part (
deptno NUMBER(10),
deptname VARCHAR2(20),
quarterly_sales NUMBER(10,2),
state VARCHAR2(2))
PARTITION BY LIST (state)
(PARTITION q1_northwest VALUES ('OR', 'WA')
TABLESPACE part1,
PARTITION q1_southwest VALUES ('AZ', 'CA', 'NM')
TABLESPACE part2,
PARTITION q1_northeast VALUES ('NY', 'VT', 'NJ')
TABLESPACE part1,
PARTITION q1_southeast VALUES ('FL', 'GA')
TABLESPACE part2,
PARTITION q1_northcent VALUES ('MN', 'WI')
TABLESPACE part1,
PARTITION q1_southcent VALUES ('OK', 'TX')
TABLESPACE part2);
SELECT table_name, tablespace_name, partitioned
FROM user_tables;
SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;
INSERT INTO list_part VALUES (10, 'A', 1000, 'OR');
INSERT INTO list_part VALUES (20, 'B', 1000, 'AZ');
INSERT INTO list_part VALUES (10, 'A', 1000, 'WA');
INSERT INTO list_part VALUES (20, 'B', 1000, 'WA');
INSERT INTO list_part VALUES (10, 'A', 1000, 'AZ');
INSERT INTO list_part VALUES (20, 'B', 1000, 'CA');
COMMIT;
SELECT * FROM list_part;
SELECT * FROM list_part PARTITION(q1_northwest);
-------------------------
RANGE Partitioning
Range partitioning maps data to partitions based on a range of column values
(usually a date column).
CREATE TABLE range_part (prof_history_id NUMBER(10),
person_id NUMBER(10) NOT NULL,
organization_id NUMBER(10) NOT NULL,
record_date DATE NOT NULL,
ph_comments VARCHAR2(200))
PARTITION BY RANGE (record_date) (
PARTITION yr0
VALUES LESS THAN (TO_DATE('01-JAN-2000','DD-MON-YYYY'))
TABLESPACE part1,
PARTITION yr1 VALUES LESS THAN (TO_DATE('01-JAN-2001','DD-MON-YYYY'))
TABLESPACE part2,
PARTITION yr2 VALUES LESS THAN (TO_DATE('01-JAN-2002','DD-MON-YYYY'))
TABLESPACE part3,
PARTITION yr9 VALUES LESS THAN (MAXVALUE)
TABLESPACE part4);
SELECT table_name, tablespace_name, partitioned
FROM user_tables;
SELECT partition_name, tablespace_name, high_value
FROM user_tab_partitions;
24- If my application is slow, how can we make it more efficient and faster? How
will you tune a slow application?
25- RMAN Backups?
RMAN 9i
http://download-uk.oracle.com/docs/cd/B10501_01/server.920/a96566/rcmintro.htm
26- ADDM
ADDM: Automatic Database Diagnostic Monitor. ADDM enables Oracle to diagnose its
own performance problems. For example, ADDM identifies the most resource-intensive
SQL statements and passes them to the SQL Tuning Advisor.
ADDM automatically analyzes the state of the database at short, regular intervals
(by default after each AWR snapshot, i.e. every 60 minutes), which yields database
performance diagnostics. It promises that you can forget all of your scripts that
link the many v$ views. ADDM can be run from Enterprise Manager or through a PL/SQL
interface.
The SQL Tuning Advisor complements ADDM as one of Oracle Database 10g's quick wins
for DBAs: it quickly tunes the poorly performing SQL statements that the built-in
diagnostic engine identifies.
ADDM details
http://www.oracle-base.com/articles/10g/AutomaticDatabaseDiagnosticMonitor10g.php
27- AWR
AWR (Automatic Workload Repository) periodically samples system performance (by
default every 60 minutes) and stores the information found (by default for up to 7
days). This allows retrieving information about workload changes and database usage
patterns.
This information is the basis for all self-management decisions. For example, it
makes it possible to identify the SQL statements with the
• largest CPU consumption
• most buffer gets
• most disk reads
• most parse calls
• most shared memory
The data is collected by the MMON background process.
The information is stored in the SYSAUX tablespace.
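The snapshot interval and retention can be adjusted through the standard
DBMS_WORKLOAD_REPOSITORY package (both parameters are in minutes), and reports are
generated with the awrrpt.sql script; the values below are only illustrative:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,       -- minutes between snapshots (illustrative)
    retention => 10080);   -- keep snapshots for 7 days (in minutes)
END;
/
SQL> @?/rdbms/admin/awrrpt   -- prompts for two snapshot IDs and the report format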
28- Flashback? What is flashback in 10g?
With Oracle's flashback, it is possible to, in a sense, travel back in time:
flashback provides a means to see data that was altered through DML operations and
the state of database objects that were changed with DDL operations. Oracle
subcategorizes flashback as follows:
• Flashback query: get previous data with the AS OF clause of a SELECT statement.
• Flashback version query: get the changes made to data within a time interval
using the VERSIONS BETWEEN clause of a SELECT statement.
• Flashback transaction query: find the changes made by a specific transaction
through the flashback_transaction_query view.
• Flashback table
• Flashback drop: undrops a dropped table.
• Flashback database
In order to make flashback possible, Oracle uses undo. Examples for these queries
are at the link below.
http://www.adp-gmbh.ch/ora/admin/flashback/version_query_ex_1.html
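A minimal sketch of the first two variants (the emp table and its columns are
hypothetical; the AS OF and VERSIONS BETWEEN clauses are standard):
SQL> SELECT * FROM emp AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
SQL> SELECT versions_starttime, versions_operation, ename, sal
     FROM emp VERSIONS BETWEEN TIMESTAMP MINVALUE AND MAXVALUE;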
29- What is the command for restoring from the recycle bin?
To reinstate the table, all you have to do is use the FLASHBACK TABLE command:
SQL> FLASHBACK TABLE RECYCLETEST TO BEFORE DROP;
To empty the recycle bin entirely:
SQL> PURGE RECYCLEBIN;
But what if you want to drop the table completely, without needing the flashback
feature? In that case, you can drop it permanently using:
SQL> DROP TABLE RECYCLETEST PURGE;
This command will not rename the table to a recycle bin name; rather, the table
will be deleted permanently, as it would have been pre-10g.
2nd interview.
30- Difference between User & Schema?
A schema is a collection of database objects. A schema is owned by a database user
and has the same name as that user. Schema objects are the logical structures that
directly refer to the database's data. Schema objects include structures like
tables, views, and indexes. (There is no relationship between a tablespace and a
schema. Objects in the same schema can be in different tablespaces, and a
tablespace can hold objects from different schemas.)
31- Cluster key?
A cluster provides an optional method of storing table data. A cluster is made up
of a group of tables that share the same data blocks. The tables are grouped
together because they share common columns and are often used together. For
example, the emp and dept tables share the deptno column. When you cluster the emp
and dept tables, Oracle physically stores all rows for each department from both
the emp and dept tables in the same data blocks.
The cluster key is the column, or group of columns, that the clustered tables have
in common. You specify the columns of the cluster key when creating the cluster,
and you subsequently specify the same columns when creating every table added to
the cluster. Each cluster key value is stored only once in the cluster and the
cluster index, no matter how many rows of different tables contain the value.
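A minimal sketch using standard DDL (the table and index names, and the abbreviated
column lists, are hypothetical):
CREATE CLUSTER emp_dept (deptno NUMBER(2));
CREATE INDEX idx_emp_dept ON CLUSTER emp_dept;   -- the cluster index must exist before DML
CREATE TABLE dept_c (deptno NUMBER(2) PRIMARY KEY, dname VARCHAR2(14))
CLUSTER emp_dept (deptno);
CREATE TABLE emp_c (empno NUMBER(4) PRIMARY KEY, ename VARCHAR2(10), deptno NUMBER(2))
CLUSTER emp_dept (deptno);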
32- RAC?
Go to Question/Answer 22
33- What are the duties of a DBA?
• To plan & create databases
• To manage database availability
• To manage physical & logical structures
• To manage storage based on design
• To manage security
• Network administration
• Backup & recovery
• Database tuning
• Import & export
• Creating tables
• Creating schemas and objects
34- What is Difference among NOMOUNT & MOUNT & OPEN modes? When Database will be
available for read?
NOMOUNT
When you issue the startup command, the first thing the database will do is enter
the nomount stage. During the nomount stage, Oracle first opens and reads the
initialization parameter file (init.ora) to see how the database is configured. For
example, the sizes of all of the memory areas in Oracle are defined within the
parameter file.
After the parameter file is accessed, the memory areas associated with the database
instance are allocated. Also, during the nomount stage, the Oracle background
processes are started. Together, we call these processes and the associated
allocated memory the Oracle instance. Once the instance has started successfully,
the database is considered to be in the nomount stage. If you issue the startup
command, then Oracle will automatically move onto the next stage of the startup,
the mount stage.
The instance is started in NOMOUNT mode:
• when creating a database
• when re-creating the control file
MOUNT MODE
When the startup command enters the mount stage, it opens and reads the control
file. The control file is a binary file that tracks important database information,
such as the location of the database datafiles.
In the mount stage, Oracle determines the location of the datafiles, but does not
yet open them. Once the datafile locations have been identified, the database is
ready to be opened.
Some forms of recovery require that the database be in the mount stage (mounted but
not open).
OPEN MODE
The last startup step for an Oracle database is the open stage. When Oracle opens
the database, it accesses all of the datafiles associated with the database. Once
it has accessed the database datafiles, Oracle makes sure that all of the database
datafiles are consistent.
35- What is SGA? And what are its components?
System Global Area: The System Global Area (SGA) is a shared memory region that
contains data and control information for one Oracle instance. Oracle allocates the
SGA when an instance starts and deallocates it when the instance shuts down. Each
instance has its own SGA.
Users currently connected to an Oracle database share the data in the SGA. For
optimal performance, the entire SGA should be as large as possible (while still
fitting in real memory) to store as much data in memory as possible and to minimize
disk I/O.
SGA Components: The information stored in the SGA is divided into several types of
memory structures, including the database buffers, redo log buffer, and the shared
pool.
Database Buffer Cache of the SGA: Database buffers store the most recently used
blocks of data. The set of database buffers in an instance is the database buffer
cache. The buffer cache contains modified as well as unmodified blocks. Because the
most recently (and often, the most frequently) used data is kept in memory, less
disk I/O is necessary, and performance is improved.
Redo Log Buffer of the SGA: The redo log buffer stores redo entries, a log of
changes made to the database. The redo entries stored in the redo log buffer are
written to an online redo log, which is used if database recovery is necessary. The
size of the redo log buffer is static.
Shared Pool of the SGA: The shared pool contains shared memory constructs, such as
shared SQL areas. A shared SQL area is required to process every unique SQL
statement submitted to a database. A shared SQL area contains information such as
the parse tree and execution plan for the corresponding statement. A single shared
SQL area is used by multiple applications that issue the same statement, leaving
more shared memory for other uses.
36- How can we allocate the SGA? How can we know its proper size? How should we
estimate it? How can we calculate the size of the SGA?
Check the SGA sizing tips at the following link.
http://www.dba-oracle.com/t_sga_sizing.htm
38- How can you fix fragmentation in tables?
(No comments)
Check this link on the 10g segment shrink feature.
http://www.dbazine.com/blogs/blog-cf/chrisfoot/blogentry.2005-09-10.8045550726
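The 10g shrink operation covered in the linked article works on segments in ASSM
tablespaces and requires row movement to be enabled first; a minimal sketch on a
hypothetical table t:
SQL> ALTER TABLE t ENABLE ROW MOVEMENT;
SQL> ALTER TABLE t SHRINK SPACE CASCADE;   -- compacts the segment and lowers the high-water mark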
39- Suppose you export table data and then import it. Does that defragment the
table?
Yes. The import re-creates and repopulates the table, which compacts the rows and
removes fragmentation.
40- The interviewer asked me: "Tell me, if you are the DBA and you left the office
at night, what will you check in the database when you arrive at the office in the
morning?"
(No comments)
I have recently joined a company, and we have a checklist that I use in the
morning. It contains:
Primary database:
* OracleServiceABC
* OracleListenerABC
* Archived log sequence #
Backups:
* Hot backups
* Full database export
Standby databases:
* OracleServiceABCYZ
* OracleListenerABCYZ
* Archived log sequence #
* Data Guard status
41- How can you schedule a job?
(No comments)
You can check my blog for scheduling a batch job:
http://rakeshocp.blogspot.com/2007/09/batch-file-scripts.html
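Inside the database itself, jobs are scheduled with DBMS_SCHEDULER (10g) or the
older DBMS_JOB. A minimal DBMS_SCHEDULER sketch; the job name, schedule, and
gathered schema are hypothetical:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_HR_STATS',                               -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(''HR''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',                            -- every day at 02:00
    enabled         => TRUE);
END;
/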
42- RAID?
In computing, specifically computer storage, a Redundant Array of Independent (or
Inexpensive) Drives or Disks (RAID) is an umbrella term for data storage schemes
that divide and/or replicate data among multiple hard drives. RAID can be designed
to provide increased data reliability, increased I/O performance, or both.
Follow the links below for RAID information
http://www.ecs.umass.edu/ece/koren/architecture/Raid/raidhome.html
http://en.wikipedia.org/wiki/RAID
http://www.dba-oracle.com/oracle_tips_raid_usage.htm
43- New features of 10g for DBAs?
Check the 10g new features for DBAs:
http://www.oracle.com/technology/pub/articles/10gdba/index.html
44- What is Database? RDBMS? Rules?
DATABASE: A collection of information organized in such a way that a computer
program can quickly select desired pieces of data.
RDBMS: a type of database management system (DBMS) that stores data in the form of
related tables. Relational databases are powerful because they require few
assumptions about how data is related or how it will be extracted from the
database. As a result, the same database can be viewed in many different ways. An
important feature of relational systems is that a single database can be spread
across several tables. This differs from flat-file databases, in which each
database is self-contained in a single table.
Almost all full-scale database systems are RDBMSs. Small database systems,
however, use other designs that provide less flexibility in posing queries.
Rules: Codd's 12 rules are a set of 12 rules proposed by Edgar F. Codd, a pioneer
of the relational model for databases, designed to define what is required from a
database management system in order for it to be considered relational, i.e. an
RDBMS. Visit the link below for Codd's 12 rules.