Performance
Performance
Parallel Processing
Automated Degree of Parallelism
How it works
• SQL statement is hard parsed and the optimizer determines the execution plan
• If the estimated execution time is greater than the threshold (PARALLEL_MIN_TIME_THRESHOLD), the optimizer determines the ideal DOP; actual DOP = MIN(default DOP, ideal DOP) and the statement executes in parallel
• If the estimated execution time is less than the threshold, the statement executes serially
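For example, Auto DOP can be enabled with two initialization parameters (the threshold value below is illustrative):
ALTER SYSTEM SET parallel_degree_policy = AUTO;
-- statements estimated to run longer than 10 seconds become candidates for parallel execution
ALTER SYSTEM SET parallel_min_time_threshold = '10';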
Parallel Statement Queuing
How it works
• SQL statement is parsed and Oracle automatically determines the DOP
• If enough parallel servers are available, the statement executes immediately
• If not enough parallel servers are available, the statement is placed on a FIFO queue
• When the required number of parallel servers becomes available, the first statement on the queue is dequeued and executed
(The original diagram shows queued statements with example DOPs between 8 and 128.)
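Queued statements can be observed in V$SQL_MONITOR; a minimal sketch (column list trimmed for readability):
SELECT sql_id, username, status
FROM   v$sql_monitor
WHERE  status = 'QUEUED';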
In-Memory Parallel Execution
How it works
• A SQL statement is issued and the size of the table being accessed is determined
• If the table is extremely small, it is read into the buffer cache on any node
• If the table is extremely large, direct reads from disk are always used
• If the table is a good candidate for In-Memory Parallel Execution, fragments of the table are read into each node's buffer cache, and only parallel servers on the same RAC node access each fragment
SQL Tuning Automation in 11g
• Fully automatic SQL Tuning task
– Runs automatically in the Maintenance Window
– Identifies, ranks and tunes candidate SQL
– Leverages SQL Tuning Advisor
• Candidate high-load SQL automatically chosen by ADDM
– Excluded: parallel queries, DML/DDL, recursive, ad-hoc (infrequent)
• Tests and (optionally) implements SQL profiles
– Only implements significantly improved plans (3x)
– Time budgeted to avoid run-away plans
• New PL/SQL Package: DBMS_AUTO_SQLTUNE
• DBA can still invoke the advisor manually for reactive tuning
SQL Tuning Automation in 11g
• Configure Automatic SQL Tuning
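A hedged sketch of the configuration calls (parameter values are illustrative):
BEGIN
  -- allow the automatic task to implement SQL profiles
  DBMS_AUTO_SQLTUNE.SET_AUTO_TUNING_TASK_PARAMETER(
    parameter => 'ACCEPT_SQL_PROFILES', value => 'TRUE');
  -- make sure the automatic SQL tuning client is enabled in all maintenance windows
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/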
Performance
Results Cache
Speeding Up Frequently-Run Queries
• Materialized Views: pre-compute and store results of
a query
– Fast: saves time SELECTing and aggregating data
– Persistent: require extra storage on disk
– Static: must be periodically refreshed
– Flexible: can take advantage of query rewrite
• What if we didn't need to look at the data at all?
PL/SQL Results Cache
• Caching reduces disk I/O by keeping frequently-used
data in memory:
– Library Cache, Buffer Cache, Dictionary Cache,…
• What is different about the PL/SQL Results Cache?
– Caches the results of frequently-executed queries
– Bypasses the need to create a query plan entirely: if the query matches, the cached results are immediately available
– Use is controllable by either a hint or parameter
– Results cache size, expiration, etc. are tunable by the DBA
– Cache is automatically invalidated if the base object is
modified
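A minimal sketch, assuming the SCOTT.EMP demo table: the RESULT_CACHE hint caches a query result, and the RESULT_CACHE clause does the same for a PL/SQL function.
SELECT /*+ RESULT_CACHE */ deptno, AVG(sal)
FROM   emp
GROUP  BY deptno;

CREATE OR REPLACE FUNCTION dept_headcount (p_deptno NUMBER)
  RETURN NUMBER
  RESULT_CACHE
AS
  l_cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO l_cnt FROM emp WHERE deptno = p_deptno;
  RETURN l_cnt;  -- cached per input value; invalidated when EMP changes
END;
/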
PL/SQL Results Cache in Pictures – Single Instance
• First execution: data is read from disk
• Disk blocks are stored in the buffer cache for faster access
• Results are moved to the Results Cache in the SGA for even faster access
PL/SQL Results Cache in Pictures – RAC
Cache Synchronization
between instances can
provide major
performance benefits
Performance
Advisors
Diagnostics and Tuning Advisors
• Since 10.1
– SQL Access Advisor:
• Indexes
• Materialized Views
• Indexes on Materialized Views
• Requires a workload
– SQL Tuning Advisor
• Optimizer in Tuning Mode
• Create profiles for the statements
• Can operate on a single SQL statement
SQL Tuning
• Example: Results of SQL Tuning Advisor SQL Profile
SQL Tuning
• Example: Compare original to new explain plan
Tuning Enhancements in 11g
• Partition Advisor
– SQL Access Advisor will now recommend partitioning
schemes
• SQL Repair Advisor
– Diagnose and fix bad SQL
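A hedged sketch of driving the advisor through DBMS_SQLDIAG (task name and SQL text are illustrative):
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK(
              sql_text     => 'SELECT COUNT(*) FROM emp',
              task_name    => 'repair_task',
              problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_EXECUTION_ERROR);
  DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK(l_task);
END;
/
SELECT DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK('repair_task') FROM dual;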
SQL Access Advisor: Partitioning
SQL Access Advisor: Recommendations
SQL Repair Advisor
SQL Repair Advisor
If "SQL Patch" is checked, a recommendation is present; click "View" to see the recommendation.
Performance
Statistics Gathering
Statistics Gathering Fundamentals
• Configure settings and parameters
select DBMS_STATS.GET_STATS_HISTORY_RETENTION from dual;
exec DBMS_STATS.ALTER_STATS_HISTORY_RETENTION(10); -- 10 days
DBMS_STATS.SET_TABLE_PREFS
DBMS_STATS.SET_SCHEMA_PREFS
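Hedged examples of setting preferences (schema, table and values are illustrative):
exec DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'EMP', 'STALE_PERCENT', '5');
exec DBMS_STATS.SET_SCHEMA_PREFS('SCOTT', 'ESTIMATE_PERCENT', 'DBMS_STATS.AUTO_SAMPLE_SIZE');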
Statistics Gathering Fundamentals
• Table monitoring is ON by default; SMON flushes the DML counters every 3 hours, or manually:
  SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();
• The number of DMLs per table is recorded in DBA_TAB_MODIFICATIONS / USER_TAB_MODIFICATIONS
• Tables without statistics are flagged EMPTY; tables changed by more than 10% are flagged STALE
• Statistics are gathered by the automatic job once a day, or manually:
  SQL> exec DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
Statistics Gathering Fundamentals
Analysis order: objects with EMPTY statistics are analyzed first (highest priority), then objects with STALE statistics, then objects whose statistics are OK (lowest priority). (Pie charts in the original slide show the relative proportions of each category.)
Incremental Global Statistics in 11g
• Reduces time needed to gather statistics on
partitioned tables
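A minimal sketch, using the SH.SALES sample table as an illustrative partitioned table:
-- gather partition-level statistics once and maintain the global statistics incrementally
exec DBMS_STATS.SET_TABLE_PREFS('SH', 'SALES', 'INCREMENTAL', 'TRUE');
exec DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES');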
Performance
Flash Cache
Oracle Database 11g Release 2
Reduce I/O bandwidth requirement with Flash Cache
• A transparent extension of the database buffer cache
using solid-state disk (SSD) technology
– SSD acts as a Level 2 cache (SGA is Level 1)
• Faster than disk (100x faster for reads)
• Cheaper than memory ($50 per gigabyte)
• Large capacity (hundreds of gigabytes per flash disk)
• Fewer drives and better performance
– For I/O throughput, users often use hundreds of drives today
– Flash enables I/O throughput without all the drives
– Large jobs complete faster
Flash Cache
How it works
Install a flash drive in the host server and set two [Link] parameters:
• db_flash_cache_file = <filename>: specifies the path to the flash disk
• db_flash_cache_size = <size>: specifies the amount of flash disk to use
The result is an extended buffer cache; the example in the diagram combines 16 GB of SGA memory, a 120 GB Flash Cache and 360 GB of magnetic disks.
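A hedged sketch of the two settings (path and size are illustrative; an instance restart is required):
ALTER SYSTEM SET db_flash_cache_file = '/dev/flash/oradb_cache' SCOPE = SPFILE;
ALTER SYSTEM SET db_flash_cache_size = 120G SCOPE = SPFILE;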
Flash Cache
How it works
1. Blocks are read from disk into the buffer cache.
2. Dirty blocks are flushed to disk.
3. Once the SGA is full, clean blocks are moved to the Flash Cache based on LRU; only the headers of flash-cached blocks are kept in the SGA.
4. A user process reads blocks from the SGA; blocks that are no longer in the SGA are copied back from the Flash Cache.
The extended buffer cache thus holds hot data in SGA memory (16 GB in the example), warm data in the Flash Cache (120 GB) and cold data on the magnetic disks (360 GB).
Performance
Plan Management
Something Has Changed…
• Challenging to "freeze" execution plans and statistics
• Difficulty:
  – A statement has been parsed and a plan has been created; verification happens only during execution (parse → execute → plan acceptable, e.g. using hash joins)
  – Now some conditions change (statistics, upgrade, parameters)
• A new plan will be created (e.g. nested loops instead of hash joins) - is it better or worse???
Without SQL Plan Management
• Options to prevent execution plan changes:
– Rule Based Optimizer (RBO desupport since Oracle 10g - Note:189702.1)
– Hints
– Stored Outlines
– Rewriting SQL statements
– optimizer_features_enable=n.n.n
– Change specific optimizer parameters
– Import and fix object and system statistics
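Hedged examples of two of these pre-SPM techniques (the index name in the hint is illustrative):
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';

SELECT /*+ INDEX(e emp_ename) */ e.ename
FROM   emp e
WHERE  e.ename = 'SMITH';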
Availability
Availability
RAC One Node
Summary – RAC One Node
The “Always On” Single Instance Database
• Always On Single Instance Database
– Online replacement of servers and storage
– Online patching and upgrades of operating system
and infrastructure software
– Online database patching
– Automated cluster failover
• Better consolidation
– Extreme consolidation of servers & storage
– Load balancing to protect service levels
• Enhanced virtualization
– Extends and improves database availability and
flexibility when running in a virtual server
• Lower cost dev/test platform for RAC
– Develop and test against same tech stack
– Lower Oracle license cost
Use Case #1: Always On Single Instance Database
Opportunity:
• Eliminate database outages for operating
system upgrades/patching and database
patching
• Replace or migrate servers and storage
without downtime
• Reduce OP-EX via online maintenance – no
overtime pay, etc.
• Reduce cost and complexity of cluster
failover solutions by using single stack
Value Proposition:
• Better availability than with 3rd-party cluster failover
• Better availability than with a stand-alone EE database
Key Capabilities:
• Fast storage failover
• Rolling upgrades/patches (online database relocation)
• Online storage add and remove
RAC One Node
The “Always On” Single Instance Database
• RAC One extends the benefits of virtualization to single-instance databases on physical hardware
  – Consolidation
  – Live Migration
  – Rolling Patches
  – Server failover
  – Standardized DB environment
• Online upgradeable to full RAC
Oracle Database 11g Release 2
RAC ONE – A grid of virtualized single instance databases
• Automated Failover within Grid
• Live migration across Grid
• Rolling Patches
• Online Upgrade to multi-node RAC instances
Use Case #2: Better Consolidation
Opportunity:
• Improve utilization of servers and storage
• Reduce management cost through server, storage and OS consolidation
• Simplify and standardize management tools and processes for all databases
Value Proposition:
• Improved flexibility, manageability and availability of servers and storage at lower cost compared with siloed databases
• Fewer servers, storage arrays and operating systems to manage compared to VM consolidation
Key Capabilities:
• Instance Caging
• Online load balancing (online database relocation)
• Storage pooling
(The original diagram contrasts VM consolidation, one OS per database, with OS consolidation, many databases per OS.)
Use Case #2: Better Consolidation
Server Consolidation
• Before consolidation: 5 servers, 15% average utilization (DB-A through DB-E each between 12% and 18%)
• After consolidation: 1 server, 75% average utilization
• Instance Caging ensures each database gets its fair share of CPU
Use Case #2: Better Consolidation
Storage Pooling
Before consolidation:
• Disks are dedicated to databases; there is no way to share capacity
• Some databases are maxed out while others have free capacity (in the example, DB-A and DB-B are full while DB-C, DB-D and DB-E are 75–85% free)
• Storage becomes a bottleneck
After consolidation:
• Storage is pooled across all databases and capacity is shared
• Heavily loaded databases draw storage capacity from all disks
• Storage is no longer a bottleneck: 50% free across all disks, or 20% free with fewer disks
Manageability
Manageability
SQL Real-Time Monitoring
SQL Real-Time Monitoring
• Dedicated statistics collected for a single execution of a
SQL statement when its execution becomes high-load
– Target:
• Parallel queries, parallel DML or parallel DDL
• Execution that exceeds 5 sec of CPU or I/O time
– Global SQL level statistics are collected: V$SQL_MONITOR
– Plan level statistics are collected (#rows, memory, temp space,
start/end date): V$SQL_PLAN_MONITOR
– Statistics are updated quasi real-time while the query executes
– Statistics for completed executions are retained for at least 5 minutes
– Feature switched on by default
• Part of Tuning Pack
• Note:1229904.1: Real-Time SQL Monitoring in 11g
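A minimal sketch of looking at the collected data (the sql_id is illustrative):
SELECT sql_id, status, elapsed_time, cpu_time
FROM   v$sql_monitor
ORDER  BY last_refresh_time DESC;

SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '9zr2vqs3tm6k8', type => 'TEXT')
FROM   dual;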
Click on the Big Stuff – Follow the DB Time
• Spike in DB Time signals possible performance issue
• Click through to Top Activity for more details
Follow the DB Time – Top Activity Drilldown
• Top Activity Page confirms problem and shows highly skewed DB
Time by SQL ID. Click on top SQL ID for further details.
Click on the Big Stuff – Follow the DB Time
• SQL Details confirms this SQL produced the spike in DB Time and shows parallel execution. Click on 'SQL Monitoring' to see DB Time by execution plan steps.
Follow the DB Time – SQL Execution Details
Manageability
Compression
Managing Data Growth
• Reduce storage costs by a factor of 10x
• Partition for performance, management and cost
• Example: an ORDERS table spanning 7 years (2003–2009); the 95% of data that is less active sits on a low-end storage tier (2–3x less per terabyte), while the 5% that is active sits on a high-end storage tier
Significantly Reduce Storage Usage
Advanced OLTP Compression
• Compress large application tables
– Transaction processing, data warehousing
• Compress all data types
– Structured and unstructured data types
• Improve query performance
– Cascade storage savings throughout data center
• Up to 4x compression
Compression
• Index compression available since 8i
• Table compression available since 9.2
– Intended for data warehousing
– Direct path and bulk INSERTs, CREATE TABLE .. AS SELECT
– Did not apply to conventional INSERT/UPDATE/DELETE
• Before 11g syntax:
CREATE TABLE sales_history… COMPRESS
• New 11g syntax:
CREATE TABLE sales_history…
COMPRESS BASIC
– In 11.2, COMPRESS BASIC supersedes the earlier clause COMPRESS FOR DIRECT_LOAD OPERATIONS
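A hedged sketch contrasting the two forms (the column list is illustrative); COMPRESS FOR OLTP is the Advanced Compression variant covered on the next slides:
CREATE TABLE sales_history
( order_id NUMBER, order_date DATE, amount NUMBER(10,2) )
COMPRESS BASIC;          -- direct-path loads only

CREATE TABLE sales_history_oltp
( order_id NUMBER, order_date DATE, amount NUMBER(10,2) )
COMPRESS FOR OLTP;       -- also compresses conventional DML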
Advanced Compression
• Advanced Compression in Oracle 11g includes:
– OLTP table compression
– RMAN backup compression (new algorithm)
– SecureFiles compression and deduplication
– DataPump export compression
– DataGuard redo transport compression
• Reduces resource requirements and costs!!!
– Storage and performance
• Typically 2x to 4x compression possible
How OLTP compression works at the block level (from the diagram): rows are inserted uncompressed; when DML fills the block up to PCTFREE, the block contents are compressed, freeing space; further uncompressed DML accumulates until PCTFREE is reached again and compression runs again. A block therefore contains block overhead, compressed rows, newly inserted uncompressed rows and free space.
Real World Compression Results
10 largest ERP database tables:
• Data storage: ~3x saving
• Table scans: ~2.5x faster
• DML performance: < 3% overhead
(Bar charts in the original slide show the measurements.)
Exadata Hybrid Columnar Compression
Highest Capacity, Lowest Cost
• Data is organized and compressed by column
– Dramatically better compression
• Speed Optimized Query Mode for Data Warehousing
  – 10X compression typical
  – Runs faster because of Exadata offload!
• Space Optimized Archival Mode for infrequently accessed data
  – 15X to 50X compression typical
• Benefits multiply: faster and simpler backup, DR, caching, reorg and clone
Exadata Hybrid Columnar Compression
How it works
• Compression Units: tables are organized into sets of a few thousand rows – Compression Units (CUs)
  – Within a CU, data is organized by column, then compressed
  – Column organization brings similar values close together, enhancing compression
• Useful for data that is bulk loaded and queried, with light update activity
• Reduces table size 4x to 50x
  – Compared to the best conventional algorithms (Gzip, Bzip2): typically 2X the compression at 10X the performance
• Exadata servers offload filtering, projection, etc. for scans on compressed data
  – Indexed accesses return compressed blocks to the database, so the buffer cache benefits from compression
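A hedged sketch of the two EHCC modes (requires Exadata storage; the column list is illustrative):
CREATE TABLE sales_query_mode
( order_id NUMBER, order_date DATE, amount NUMBER(10,2) )
COMPRESS FOR QUERY HIGH;     -- warehouse compression

CREATE TABLE sales_archive_mode
( order_id NUMBER, order_date DATE, amount NUMBER(10,2) )
COMPRESS FOR ARCHIVE HIGH;   -- archival compression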
Manageability
Resource Management
Instance Caging
• Instance Caging is an Oracle feature for “caging” or
limiting the amount of CPU that a database instance
can use at any time
• Important tool for server consolidation
• Available in [Link]
• Just 2 steps:
1. Set “cpu_count” parameter
• Maximum number of CPUs the instance can use at any
time
2. Set “resource_manager_plan” parameter
• Enables CPU Resource Manager
• E.g. out-of-box plan “DEFAULT_PLAN”
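The two steps above, as a hedged sketch (the cpu_count value is illustrative):
ALTER SYSTEM SET cpu_count = 4;
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';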
CPU Usage Without Instance Caging
Oracle processes from one database instance try to use all CPUs; processes that cannot run wait for CPU on the O/S run queue.
CPU Usage With Instance Caging
Instance Caging limits the number of Oracle processes running at any moment in time; the remaining processes wait for CPU on Resource Manager run queues instead of the O/S run queue.
Instance Caging Results
• 4 CPU server
• Workload is a mix of OLTP transactions, parallel queries, and
DMLs from Oracle Financials
Exadata I/O Resource Management
Mixed Workloads and Multi-Database Environment
• Ensure different databases are allocated the correct relative amount of I/O bandwidth
  – Database A: 33% of I/O resources
  – Database B: 67% of I/O resources
• Ensure different users and tasks within a database are allocated the correct relative amount of I/O bandwidth
  – Database A: Reporting 60%, ETL 40%
  – Database B: Interactive 30%, Batch 70%
(The original diagram shows both databases connected to the Exadata cells over the InfiniBand switch/network.)
I/O Scheduling, the Traditional Way
With traditional storage, disks service I/Os in FIFO order; I/Os are reordered only to improve disk efficiency, and you cannot influence their behavior. A burst of report I/Os will be queued ahead of OLTP I/Os – and serviced ahead of them.
I/O Scheduling, the Exadata Way
The I/O Resource Manager (IORM) inside each Exadata storage cell controls the order in which I/Os are issued to disk. IORM issues enough I/Os to keep the disks busy and efficient, and queues I/Os per database, as necessary, within Exadata. It uses the resource plan to determine the order of I/O requests and prevents any one database from flooding the disks.
Exadata I/O Resource Manager
How to configure I/O Resource Manager?
Example: three databases – Customer Service, Sales Warehouse and Finance Warehouse – share Exadata storage, with allocations of 30%, 60% and 10% respectively. Configure an IORM resource plan that sets an allocation for each database; resource allocation is equivalent to disk utilization.
Exadata I/O Resource Manager
What happens if the Customer Service database is temporarily idle? Its actual utilization drops to 0% and the unused bandwidth is redistributed: Sales Warehouse receives 75% and Finance Warehouse 25%. The resource plan specifies how unused bandwidth is reallocated; the goal is to keep the disks fully utilized.
Exadata I/O Resource Manager
What happens if the Finance database should not use more than 20% of the disk bandwidth? A hard utilization limit of 20% is set for Finance Warehouse: with Customer Service idle (0%), Sales Warehouse receives 80% and Finance Warehouse stays at 20%. Resource plans can specify hard utilization limits per database (new in [Link]!), which is useful for hosted environments and for providing consistent performance.
I/O Utilization Limit Results
(Chart: disk utilization over time with no I/O limit and with 75%, 50% and 25% I/O limits. Queries are from the TPC-H benchmark suite; disk utilization measured via iostat.)
Workloads Contending for CPU
When a database host has insufficient CPU for all workloads, the workloads compete for CPU and the performance of all of them degrades. What if you cannot tolerate performance degradation for certain workloads? (The chart compares CPU usage for OLTP only, reports only, and OLTP + reports combined.)
Managing Contending Workloads
With Resource Manager, you control how CPU resources are allocated: when OLTP and reports run together with Resource Manager enabled, either workload can be prioritized so that its CPU share is protected. (The chart compares OLTP only, reports only, and OLTP + reports with OLTP prioritized and with reports prioritized.)
Configuring Resource Manager
1. Group sessions with similar performance objectives into
Consumer Groups
2. Allocate resources to consumer groups using Resource
Plans
3. Enable Resource Plan
Create Consumer Groups
– Create Consumer Groups for each type of workload, e.g.
• OLTP consumer group
• Reports consumer group
• Low-Priority consumer group
– Create rules to dynamically map sessions to consumer groups
Mapping rules → consumer groups:
• service = 'Customer_Service' → OLTP
• client program = 'Siebel Call Center' → OLTP
• Oracle user = 'Oscar' → Reports
• module = 'AdHoc%' → Reports
• query has been running > 1 hour → Low-Priority
• estimated execution time of query > 1 hour → Low-Priority
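A hedged sketch of creating the groups and two of the mapping rules above (names and values are illustrative):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('OLTP',    'interactive work');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('REPORTS', 'reporting work');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'CUSTOMER_SERVICE',
    consumer_group => 'OLTP');
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.ORACLE_USER,
    value          => 'OSCAR',
    consumer_group => 'REPORTS');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/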
Create Resource Plans
• Ratio Plan: OLTP 60%, Reports 30%, Low-Priority 10%
• Priority Plan: Priority 1 OLTP, Priority 2 Reports, Priority 3 Low-Priority
• Hybrid Plan with hard limits:
  – OLTP: 90% at level 1
  – Reports: 60% at level 2, max 75%
  – Low-Priority: 40% at level 2, max 50%
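A hedged sketch of the ratio plan above (for brevity, Low-Priority sessions are left in the mandatory OTHER_GROUPS catch-all):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN('DAYTIME_PLAN', 'ratio plan for business hours');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OLTP',
    comment => 'OLTP gets 60%', mgmt_p1 => 60);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'REPORTS',
    comment => 'Reports get 30%', mgmt_p1 => 30);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME_PLAN', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everything else', mgmt_p1 => 10);
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/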
Enable Resource Plan
• Enable manually
– Set resource_manager_plan parameter
• Enable automatically
– Configure a job scheduler window to use a resource plan
– Weekdays 8 am – 6 pm: DAYTIME_PLAN
– Weeknights and weekends: OFFHOURS_PLAN
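Hedged examples of both methods (window and plan names are illustrative):
ALTER SYSTEM SET resource_manager_plan = 'DAYTIME_PLAN';

BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'WEEKNIGHT_WINDOW',
    attribute => 'RESOURCE_PLAN',
    value     => 'OFFHOURS_PLAN');
END;
/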
Resource Manager User Interface
CPU Usage with Resource Manager
Sessions that exceed their CPU allocation wait on the "resmgr:cpu quantum" event in an Oracle-internal CPU queue. CPU Resource Manager schedules sessions every 100 ms according to the resource plan; with a plan of OLTP 75% / Reports 25%, OLTP sessions are picked roughly 3 out of 4 times.
Resource Manager, End to End
Risk and Change
Risk and Change
Online Application Upgrade
Online Application Upgrade
Edition-based redefinition
• Code changes are installed in the privacy of a new
edition
• Data changes are made safely by writing only to new
columns or new tables not seen by the old edition
• An editioning view exposes a different projection of
a table into each edition to allow each to see just its
own columns
• A crossedition trigger propagates data changes made by the old edition into the new edition's columns, or (in hot-rollover) vice-versa
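A hedged sketch of these two building blocks, with illustrative object and column names (the crossedition trigger is created in the new edition on the base table):
CREATE OR REPLACE EDITIONING VIEW emp AS
  SELECT empno, ename, phone_number FROM emp_table;

CREATE OR REPLACE TRIGGER emp_fwd_xed
  BEFORE INSERT OR UPDATE ON emp_table
  FOR EACH ROW
  FORWARD CROSSEDITION
BEGIN
  -- populate the new edition's column from the old edition's data
  :NEW.phone_e164 := '+1' || :NEW.phone_number;
END;
/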
Editions & object visibility
The pre-upgrade edition contains Object_1 through Object_4.
Editions & object visibility
The post-upgrade edition is created as a child of the pre-upgrade edition; it initially inherits Object_1 through Object_4.
Editions & object visibility
SQL> create or replace procedure Object_4....
SQL> create or replace view Object_3.....
Editions & object visibility
After the CREATE OR REPLACE statements, the post-upgrade edition has its own actual copies of Object_3 and Object_4, while Object_1 and Object_2 are still inherited from the pre-upgrade edition.
Editions & object visibility
SQL> drop procedure Object_4....
Editions & object visibility
After the DROP, Object_4 no longer exists in the post-upgrade edition; Object_3 remains an actual object there, and Object_1 and Object_2 remain inherited. The pre-upgrade edition still sees all four original objects.
Online Application Upgrade
Edition-based redefinition
SQL> create user demo identified by demo;
User created.
SQL> grant create session, create procedure to demo;
Grant succeeded.
SQL> create edition version2 as child of ora$base;
Edition created.
Online Application Upgrade
Edition-based redefinition
SQL> connect demo/demo
Connected.
SQL>
SQL> create or replace procedure my_procedure
2 as
3 begin
4 dbms_output.put_line( 'Hello World, I am version 1.0' );
5 end;
6 /
Procedure created.
SQL> create or replace procedure my_procedure2
2 as
3 begin
4 my_procedure;
5 end;
6 /
Procedure created.
Online Application Upgrade
Edition-based redefinition
SQL> exec my_procedure2
Hello World, I am version 1.0
PL/SQL procedure successfully completed.
Online Application Upgrade
Edition-based redefinition
SQL> connect sys/welcome1
Connected.
SQL> alter user demo enable editions;
User altered.
SQL> grant use on edition version2 to demo;
Grant succeeded.
Online Application Upgrade
Edition-based redefinition
SQL> connect demo/demo;
Connected.
SQL> alter session set edition = version2;
Session altered.
SQL> select object_name, object_type, status, edition_name
from user_objects;
OBJECT_NAME OBJECT_TYPE STATUS EDITION_NAME
---------------- -------------- ------- ---------------
MY_PROCEDURE PROCEDURE VALID ORA$BASE
MY_PROCEDURE2 PROCEDURE VALID ORA$BASE
Online Application Upgrade
Edition-based redefinition
SQL> create or replace procedure my_procedure
2 as
3 begin
4 dbms_output.put_line( 'Hello World, I am version 2.0' );
5 end;
6 /
Procedure created.
SQL> select object_name, object_type, status, edition_name from
user_objects;
OBJECT_NAME OBJECT_TYPE STATUS EDITION_NAME
--------------- ------------------- ------- --------------
MY_PROCEDURE2 PROCEDURE VALID ORA$BASE
MY_PROCEDURE PROCEDURE VALID VERSION2
Online Application Upgrade
Edition-based redefinition
SQL> SELECT SYS_CONTEXT('userenv', 'current_edition_name')
FROM DUAL;
SYS_CONTEXT('USERENV','CURRENT_EDITION_NAME')
--------------------------------------------------
VERSION2
SQL> exec my_procedure2
Hello World, I am version 2.0
PL/SQL procedure successfully completed.
Online Application Upgrade
Edition-based redefinition
SQL> connect demo/demo
Connected.
SQL> SELECT SYS_CONTEXT('userenv', 'current_edition_name')
FROM DUAL;
SYS_CONTEXT('USERENV','CURRENT_EDITION_NAME')
----------------------------------------------
ORA$BASE
SQL> exec my_procedure2
Hello World, I am version 1.0
PL/SQL procedure successfully completed.
Online Application Upgrade
Edition-based redefinition
SQL> alter session set edition = version2;
Session altered.
SQL> SELECT SYS_CONTEXT('userenv', 'current_edition_name')
FROM DUAL;
SYS_CONTEXT('USERENV','CURRENT_EDITION_NAME')
-----------------------------------------------
VERSION2
SQL> exec my_procedure2
Hello World, I am version 2.0
PL/SQL procedure successfully completed.
Risk and Change
Isolating change
Isolating Change – Invisible Indexes
• OPTIMIZER_USE_INVISIBLE_INDEXES
• Default: FALSE
• Values: TRUE | FALSE
• Purpose: when set to FALSE, the optimizer ignores "invisible" indexes; DML still maintains the index
• Motivation: test the performance effect of an index in isolation, at session level
• Example:
CREATE INDEX emp_ename ON emp(ename)
INVISIBLE;
ALTER SESSION SET
OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE;
Isolating Change – PENDING Statistics
• OPTIMIZER_USE_PENDING_STATISTICS
• Values: TRUE | FALSE
• Purpose: Mitigate risk with new stats post upgrade
• Use case:
• Switch on pending stats temporarily (newly gathered statistics are kept pending instead of being published):
DBMS_STATS.SET_GLOBAL_PREFS('PUBLISH','FALSE');
• Gather new Oracle 11g statistics:
DBMS_STATS.GATHER_TABLE_STATS('APPUSER','TAB_1');
• Test your critical SQL statement(s) with pending stats:
ALTER SESSION SET optimizer_use_pending_statistics=TRUE;
• When proven publish the new Oracle 11g statistics:
DBMS_STATS.PUBLISH_PENDING_STATS();
Risk and Change
Real Application Testing (RAT)
Real Application Testing
• Goal:
– Record and replay a real workload to see how the new system
performs
– Find regressions and changing plans before the upgrade
• Licensable database pack "Real Application Testing"
Available since Oracle Database [Link]
Available with patch set [Link]/5
Available as single patch for [Link] and [Link]/3
For patch numbers please see Note:560977.1
Database Replay Overview
• Replay actual production database workload in test environment
• Identify, analyze and fix potential instabilities before making changes to
production
• Capture Workload in Production
– Capture full production workload with real load, timing & concurrency
characteristics
– Move the captured workload to test system
• Replay Workload in Test
– Make the desired changes in test system
– Replay workload with full production characteristics
– Honor commit ordering
• Analyze & Report
– Errors
– Data divergence
– Performance divergence
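A hedged sketch of the main API calls behind these steps (names and directory objects are illustrative; the wrc replay clients are started at the OS level):
-- production: capture
exec DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name => 'PEAK_LOAD', dir => 'CAPTURE_DIR');
exec DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE();

-- test: process once, then replay
exec DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'REPLAY_DIR');
exec DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(replay_name => 'PEAK_REPLAY', replay_dir => 'REPLAY_DIR');
exec DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY();
-- start the wrc clients, then:
exec DBMS_WORKLOAD_REPLAY.START_REPLAY();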
Step 1: Workload Capture
• All external client requests are captured in binary files on the production system's file system
• System background and internal activity is excluded
• Minimal overhead
  – Avoids function calls when possible
  – Buffered I/O
• Independent of client protocol
• Can capture on [Link] and replay on 11g
• Capture load for an interesting time period, e.g. peak workload, month-end processing, etc.
Step 2: Process Workload Files
• Set up the test system
  – Application data should be the same as on the production system as of the capture start time
  – Use RMAN, Snapshot Standby, imp/exp, Data Pump, etc. to create the test system
  – Make the change: upgrade database and/or OS, change storage, migrate platforms, etc.
• Processing transforms the capture files into a replayable format (replay files plus metadata)
• Once processed, the workload can be replayed many times
• For RAC, copy all capture files to a single location for processing
Step 3: Replay Workload
• Replay the captured workload
  – Replayed operations see the same data and perform the same work
  – Timing and concurrency characteristics are preserved (timing preservation, re-mapping, commit-order synchronization)
  – Same number of user connections
• Replay Client
  – Multithreaded OCI client that drives multiple captured processes
  – Scalable architecture
  – Interprets the capture into a sequence of OCI calls (functional replay)
Step 4: Analysis & Reporting
• Error Divergence: For each call error divergence is reported
– New: Error encountered during replay not seen during capture
– Not Found: Error encountered during capture not seen during
replay
– Mutated: Different error produced in replay than during capture
• Data Divergence
– Replay: Number of rows returned by each call are compared and
divergences reported
– User: Application level validation scripts
• Performance Reporting
– Capture and Replay Report: Provides high-level performance
information
– ADDM Report: Provides in-depth performance analysis
– AWR, ASH Report: Facilitates comparative or skew analysis
SQL Performance Analyzer: Overview
Capture SQLs on the production system, transport them to the test system, and re-execute them there with SQL Performance Analyzer. No clients or middle tier are necessary on the test side.
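A hedged sketch of the SPA workflow against a SQL Tuning Set (STS, task and execution names are illustrative):
VARIABLE tname VARCHAR2(64)
EXEC :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'PEAK_STS', task_name => 'SPA_UPGRADE');
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA_UPGRADE', execution_type => 'TEST EXECUTE', execution_name => 'before_change');
-- make the change (upgrade, parameter, statistics, ...) and run again
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA_UPGRADE', execution_type => 'TEST EXECUTE', execution_name => 'after_change');
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'SPA_UPGRADE', execution_type => 'COMPARE PERFORMANCE');
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_UPGRADE', 'TEXT') FROM dual;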
Other!!!
LONG LOB Conversion
• LONG data type is still supported, but Oracle
recommends conversion to LOB types
– LONG → CLOB or NCLOB; LONG RAW → BLOB
– Disadvantages of LONG:
• Maximum number of LONG columns per table : 1
• No replication possible with LONG and LONG RAW
• Attention:
– LONG → LOB conversion is irreversible
LONG LOB Conversion
• Conversion:
ALTER TABLE long_tab MODIFY ( long_col CLOB );
• Online Table Redefinition
– Available for LONG/LOB since 10g
– Package: DBMS_REDEFINITION
– Mapping:
  • LONG → CLOB
  • LONG RAW → BLOB
– Conversion is done implicitly
• For scripted examples see: Note:251417.1, Note:556283.1 and
Note:1012454.7
– Advantage:
Source table is available during the whole process
SecureFiles
• Securefiles = new LOB storage technology
– Better performance
– Additional features: deduplication, encryption
• Examples:
CREATE TABLE t1 (a CLOB) LOB(a) STORE AS SECUREFILE;
CREATE TABLE t2 (a CLOB) LOB(a) STORE AS SECUREFILE
(DEDUPLICATE);
CREATE TABLE t3 (a CLOB ENCRYPT USING 'AES128')
LOB(a) STORE AS SECUREFILE (CACHE);
– Tablespace must be ASSM managed
– Initialization parameter
DB_SECUREFILE = [NEVER | PERMITTED | ALWAYS | IGNORE]
CREATE TABLE t1 (a CLOB); -- this will be a Securefile!!!
SecureFiles
• Securefiles are fully transparent
– Data type still a BLOB/CLOB
– Definition per storage clause or controlled by init Parameter
– No functional differences
• Even "deduplication" is fully transparent for developers
• API access:
PL/SQL (DBMS_LOB), JDBC, .NET, PHP, ...
• Reference:
Wellcome Trust Sanger Institute - ~500TB Database
[Link]
External Tables can run code now
• External tables allow for a preprocessor
– Program is run when you SELECT from external table
– The 'location' is passed to the script/executable
– The executable does whatever it wants and writes to stdout
– stdout is treated as the input file
• We need a way to control who can do what
• GRANT EXECUTE ON DIRECTORY handles that
EXECUTE and PREPROCESSOR
SQL> CREATE or replace DIRECTORY load_dir
2 AS '/mnt/hgfs/docs/Presentations/Seminar/11gr2'
3 /
Directory created.
SQL> CREATE or replace DIRECTORY exec_dir
2 AS '/mnt/hgfs/docs/Presentations/Seminar/11gr2'
3 /
Directory created.
EXECUTE and PREPROCESSOR
SQL> CREATE TABLE EMP_ET
2 (
3 "EMPNO" NUMBER(4),
4 "ENAME" VARCHAR2(10),
5 "JOB" VARCHAR2(9),
6 "MGR" NUMBER(4),
7 "HIREDATE" DATE,
8 "SAL" NUMBER(7,2),
9 "COMM" NUMBER(7,2),
10 "DEPTNO" NUMBER(2)
11 )
12 ORGANIZATION EXTERNAL
13 ( TYPE oracle_loader
14 DEFAULT DIRECTORY load_dir
15 ACCESS PARAMETERS
16 ( RECORDS DELIMITED BY NEWLINE
17 PREPROCESSOR exec_dir:'run_gunzip.sh'
18 FIELDS TERMINATED BY "|" LDRTRIM
19 )
20 location ( '[Link]')
21 )
22 /
Table created.
EXECUTE and PREPROCESSOR
SQL> !file [Link]
[Link]: gzip compressed data, was "[Link]", from Unix, last
modified: Wed Oct 7 [Link] 2009
SQL> !cat run_gunzip.sh
#!/bin/bash
/usr/bin/gunzip -c $*
SQL> select empno, ename from emp_et where rownum <= 5;
EMPNO ENAME
---------- ----------
7369 SMITH
7499 ALLEN
7521 WARD
7566 JONES
7654 MARTIN
Extending ASM to Manage ALL Data
ASM now manages all data: database files, general-purpose files on ACFS, and third-party file systems. In the stack, an ASM instance manages disk groups holding the database files (datafiles, OCR and voting files) and ASM Dynamic Volumes; the Dynamic Volume Manager presents those volumes to the ASM Cluster & Single Node File System (ACFS, with snapshots) or to third-party file systems, which hold general-purpose files such as the Oracle binaries.
What is ASM Cluster File System (ACFS)
• General purpose scalable file system
– Journaling, extent based
– Single node and cluster
– POSIX, X/OPEN file system solution for UNIX/Linux
– Windows file system solution for Windows platforms
• Accessible through NAS protocols (NFS, CIFS)
• Leverages ASM technology for volume mgt
• Integrated with Oracle Clusterware for cluster support
• Multi OS platform (Linux and Windows at initial release)
• Integrated with Oracle system mgt tools
– Oracle installation and configuration
– Enterprise Manager and ASM Storage mgt tools
– Native OS File System Management tools
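A hedged sketch of creating an ADVM volume from SQL*Plus (disk group name and size are illustrative); the volume device is then formatted and mounted as ACFS at the OS level:
ALTER DISKGROUP data ADD VOLUME acfsvol1 SIZE 10G;
SELECT volume_name, volume_device FROM v$asm_volume;
-- at the OS level (as root), e.g. on Linux:
--   /sbin/mkfs -t acfs /dev/asm/acfsvol1-123
--   /bin/mount -t acfs /dev/asm/acfsvol1-123 /u01/app/acfsmounts/vol1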