Dynamic SGA
The dynamic SGA infrastructure allows the Buffer Cache, Shared Pool, and Large Pool (see note below) to be resized without shutting down the database. This model introduces a new unit of allocation called the 'Granule': a unit of contiguous virtual memory allocation. The size of a granule depends on the estimated total SGA size, whose calculation is based on the value of the parameter SGA_MAX_SIZE: it is 4MB if the SGA size is less than 128MB, and 16MB otherwise.
The Buffer Cache, Shared Pool and Large Pool are allowed to grow and
shrink based on granule boundaries.
At instance startup the Oracle server allocates granule entries, one for each granule needed to support SGA_MAX_SIZE bytes of address space. During startup, each component acquires as many granules as it requires.
The minimum SGA is three granules, as follows:
1. One Granule for Fixed SGA (includes redo buffers)
2. One Granule for Buffer Cache
3. One Granule for Shared Pool
We can alter the granules allocated to components using the ALTER SYSTEM command. The requested sizes are rounded up to the nearest multiple of the default granule size (4MB or 16MB). Example:
alter system set shared_pool_size=64m;
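As a sketch of the rounding behavior (the sizes below are illustrative, not from the original text):

```sql
-- With an estimated SGA above 128MB the granule size is 16MB,
-- so a 70MB request is rounded up to 80MB (5 granules of 16MB).
ALTER SYSTEM SET shared_pool_size = 70M;

-- Component sizes (and hence granule usage) can be inspected in V$SGASTAT.
SELECT pool, SUM(bytes) FROM v$sgastat GROUP BY pool;
```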
The Power of Cursor Sharing in 9i
SQL statements that differ only in literals can be forced to share a cursor. This feature was introduced in 8i and is switched on with the parameter setting cursor_sharing=force. It works by transforming the statement into a new one, replacing the literals with system-generated bind variables. The problem with bind variables is that the optimizer cannot predict precise selectivity.
With cursor_sharing=force, the FIRST SQL statement loaded into memory is parsed, and its execution plan is reused for subsequent similar statements. This can be a problem if the first statement's literals have good selectivity but those of the subsequent statements do not: the statements with poor selectivity will also use the index, whereas a full table scan would perform better. Conversely, if the first statement has poor selectivity and a full table scan is used, the subsequent similar statements will also use a full table scan.
To overcome this problem, 9i offers the cursor_sharing=similar setting, which makes the optimizer examine the histograms associated with the columns.
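A minimal sketch of the behavior (the table and column names are illustrative):

```sql
-- Force literal replacement for this session.
ALTER SESSION SET cursor_sharing = FORCE;

-- These two statements differ only in the literal, so they are
-- rewritten to something like
--   SELECT * FROM emp WHERE deptno = :"SYS_B_0"
-- and share a single cursor (and therefore a single execution plan).
SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM emp WHERE deptno = 20;

-- New in 9i: SIMILAR still replaces literals, but allows different
-- plans when column histograms show the selectivities differ.
ALTER SESSION SET cursor_sharing = SIMILAR;
```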
CASE statements
and expressions
CASE statements and expressions are a shorthand way of representing IF/THEN choices with multiple alternatives.
CASE Expressions
A CASE expression selects a result and returns it. To select the result,
the CASE expression uses a selector, an expression whose value is used
to select one of several alternatives. A CASE expression has the following
form:
CASE selector
WHEN expression1 THEN result1
WHEN expression2 THEN result2
...
WHEN expressionN THEN resultN
[ELSE resultN+1]
END;
The selector is followed by one or more WHEN clauses, which are checked sequentially. The value of the selector determines which clause is executed. The first WHEN clause that matches the value of the selector determines the result value, and subsequent WHEN clauses are not evaluated. An example follows:
DECLARE
grade CHAR(1);
appraisal VARCHAR2(20);
BEGIN
...
appraisal :=
  CASE grade
    WHEN 'A' THEN 'Excellent'
    WHEN 'B' THEN 'Very Good'
    WHEN 'C' THEN 'Good'
    WHEN 'D' THEN 'Fair'
    WHEN 'F' THEN 'Poor'
    ELSE 'No such grade'
  END;
...
END;
The optional ELSE clause works similarly to the ELSE clause in an IF
statement. If the value of the selector is not one of the choices covered
by a WHEN clause, the ELSE clause is executed. If no ELSE clause is provided
and none of the WHEN clauses are matched, the expression returns NULL.
CASE Statement
Like the IF statement, the CASE statement selects one sequence of statements to execute. However, to select the sequence, the CASE statement uses a selector rather than multiple Boolean expressions. Example:
CASE grade
WHEN 'A' THEN dbms_output.put_line('Excellent');
WHEN 'B' THEN dbms_output.put_line('Very Good');
WHEN 'C' THEN dbms_output.put_line('Good');
WHEN 'D' THEN dbms_output.put_line('Fair');
WHEN 'F' THEN dbms_output.put_line('Poor');
ELSE dbms_output.put_line('No such grade');
END CASE;
The CASE statement is more readable and more efficient. So, when possible, rewrite lengthy IF-THEN-ELSIF statements as CASE statements.
The CASE statement begins with the keyword CASE. The keyword is followed by a selector, which is the variable grade in the last example. The selector expression can be arbitrarily complex. For example, it can contain function calls. Usually, however, it consists of a single variable. The selector expression is evaluated only once. The value it yields can have any PL/SQL datatype other than BLOB, BFILE, an object type, a PL/SQL record, an index-by-table, a varray, or a nested table.
Examples
1 - The following example shows a simple CASE statement. Notice that
you can use multiple statements after a WHEN clause.
DECLARE
  n NUMBER;
BEGIN
  CASE n
    WHEN 1 THEN dbms_output.put_line('n = 1');
    WHEN 2 THEN
      dbms_output.put_line('n = 2');
      dbms_output.put_line('That implies n > 1');
    ELSE dbms_output.put_line('n is some other value.');
  END CASE;
END;
2 - The following example shows a searched CASE statement. Notice that the WHEN clauses can use different conditions rather than all testing the same variable or using the same operator. Because this example does not use an ELSE clause, an exception is raised if none of the WHEN conditions are met.
DECLARE
  quantity NUMBER;
  projected NUMBER;
  needed NUMBER;
BEGIN
  <<here>>
  CASE
    WHEN quantity IS NULL THEN
      dbms_output.put_line('Quantity not available');
    WHEN quantity + projected >= needed THEN
      dbms_output.put_line('Quantity ' || quantity ||
        ' should be enough if projections are met.');
    WHEN quantity >= 0 THEN
      dbms_output.put_line('Quantity ' || quantity ||
        ' is probably not enough.');
  END CASE here;
EXCEPTION
  WHEN CASE_NOT_FOUND THEN
    dbms_output.put_line('Somehow quantity must be less than 0.');
END;
New Date/Time Types
The new datatype TIMESTAMP records time values including fractional
seconds. New datatypes TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL
TIME ZONE allow you to adjust date and time values to account for time
zone differences. You can specify whether the time zone observes daylight
savings time, to account for anomalies when clocks shift forward or backward.
New datatypes INTERVAL DAY TO SECOND and INTERVAL YEAR TO MONTH represent differences between two date and time values, simplifying date arithmetic.
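A brief sketch of the new types (the table and column names are illustrative):

```sql
-- TIMESTAMP keeps fractional seconds; the WITH TIME ZONE variants
-- store or normalize a time zone offset as well.
CREATE TABLE event_log (
  ev_name VARCHAR2(30),
  ev_at   TIMESTAMP(3) WITH TIME ZONE,
  ev_len  INTERVAL DAY TO SECOND
);

-- Interval arithmetic: intervals can be added to timestamps, and
-- subtracting two timestamps yields an INTERVAL DAY TO SECOND.
SELECT SYSTIMESTAMP + INTERVAL '1' DAY FROM dual;
SELECT SYSTIMESTAMP - TIMESTAMP '2001-01-01 00:00:00' FROM dual;
```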
Oracle9i Data Guard
One of the biggest threats to a DBA's career is being even partially responsible for the loss of valuable business data. High on the administrator's list of priorities is a well-tested backup and recovery plan. Often, applications serving web-based content cannot tolerate the extended downtime required to restore a database from backups.
Meeting demands for robust data protection from all types of failures
coupled with high data availability have been the primary drivers behind
the development of Oracle9i Data Guard. It is built on the solid
foundations of Oracle standby database technology coupled with the proven
Oracle online backup and recovery capabilities.
Data Guard maintains one or more synchronized copies of a customer's production data. An Oracle9i Data Guard configuration consists of a collection of loosely connected systems that combine the primary database and physical standby databases into a single, easily managed disaster recovery solution. Often, the sites in a Data Guard configuration are dispersed geographically and connected by Oracle*Net. As the primary database is modified, the physical standby database is sent the log information generated by changes made to the primary. These changes are applied to the standby database, which runs in managed recovery mode. While the primary database is open and active, a standby database is either performing recovery or open for reporting access. Should something go wrong with the primary, a standby database can be activated to replace it.
Oracle9i LogMiner
Have you ever wondered who was responsible for changing the salary
table to zero out your fellow DBA's pay or entering all the new prices
in the online store without using the decimal point? Would it be useful
to locate the offending SQL statement and be provided with SQL required
to correct it? This is a great tool for undoing a single erroneous transaction
without having to go through a database restore.
By default, every change made to an Oracle database generates undo and redo information, which is accumulated in the Oracle redo log files. Oracle9i LogMiner is an integrated feature of the Oracle9i Database that provides DBAs and auditors with the infrastructure required for relational access to Oracle's redo stream.
Oracle9i also brings some significant enhancements to LogMiner for log files generated by Oracle9i databases.
Oracle9i Flashback Query
Under normal circumstances, database operations are performed on the most recently committed data. However, there can be situations where a user performs an erroneous DML operation, such as an update, delete, or insert, and then commits the change. In the past, if the user was not able to manually fix the mistake (a situation encountered quite frequently), the only recourse was to ask the database administrator to perform point-in-time database
recovery. With Oracle9i's Flashback Query feature, users can query data at a point in time in the past by specifying either the particular time or the System Change Number (SCN). This gives users the ability to easily recover from DML errors without requiring any structural changes to the database or intervention from database administrators. Using this feature, users can recover from mistakes by simply restoring the data as it was prior to the erroneous transaction.
How does Flashback Query Work?
The Oracle database ensures read consistency of the data by storing
the image of the data prior to any change in undo or rollback segments.
When a user makes a change to some data and then decides not to commit
it, the change can be undone by issuing the rollback command. This undo
is implemented by retrieving the image of the data prior to the change
from the undo segments and recreating the old data. The Flashback Query mechanism is based on this feature. Oracle9i's Automatic Undo Management feature
lets database administrators specify how long they want to retain old data
in the database. This is done by setting the UNDO_RETENTION initialization
parameter. By setting this parameter and allocating sufficient storage
space for the undo tablespace, database administrators can control how
far back data can be queried using the Flashback Query feature.
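In 9i this mechanism is exposed through the DBMS_FLASHBACK package; a minimal sketch (the timestamp and table name are illustrative):

```sql
-- Put the session "in the past": subsequent queries see data as of
-- the given time (or SCN, via ENABLE_AT_SYSTEM_CHANGE_NUMBER).
EXECUTE dbms_flashback.enable_at_time( -
  TO_TIMESTAMP('2001-05-03 09:00:00', 'YYYY-MM-DD HH24:MI:SS'));

-- Returns the rows as they existed at 09:00.
SELECT * FROM scott.emp;

-- Return to the present before doing any DML.
EXECUTE dbms_flashback.disable;
```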
Using Flashback Query
Before Flashback Query functionality can be used by ordinary users, some setup actions are required from the database administrators.
Flashback Query Applications
The Flashback Query feature can be used for a number of applications, such as self-service repair of erroneous changes and historical reporting.
Online Table Redefinition
One of the most exciting new online features in Oracle9i is online table redefinition, which allows Oracle database administrators to redefine tables while they remain available to users.
There are five basic steps to redefine a table:
1. Create a new image of the table with all of the desired attributes.
2. Start the redefinition process.
3. Create any triggers, indexes, grants and constraints on the new
image of the table.
4. Optionally synchronize and validate data in the new image of the
table periodically.
5. Complete the redefinition of the table.
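The steps above map onto the DBMS_REDEFINITION package; a hedged sketch (the schema, table, and interim-table names are illustrative):

```sql
-- 1. Verify the table can be redefined online.
EXECUTE dbms_redefinition.can_redef_table('SCOTT', 'EMP');

-- (Create the interim table SCOTT.EMP_NEW with the desired attributes.)

-- 2. Start the redefinition process.
EXECUTE dbms_redefinition.start_redef_table('SCOTT', 'EMP', 'EMP_NEW');

-- 3. Create triggers, indexes, grants and constraints on EMP_NEW.

-- 4. Optionally resynchronize the interim table periodically.
EXECUTE dbms_redefinition.sync_interim_table('SCOTT', 'EMP', 'EMP_NEW');

-- 5. Complete the redefinition; EMP and EMP_NEW swap definitions.
EXECUTE dbms_redefinition.finish_redef_table('SCOTT', 'EMP', 'EMP_NEW');
```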
Database Resource Manager
One of the most critical challenges for database administrators is
to maintain a given performance level with limited hardware resources.
Traditionally, it has been up to the operating system (OS) to regulate resource management among the various applications running on a system, including Oracle databases. However, since the OS cannot distinguish one Oracle user or session from another, it cannot perform any resource management among the different users or applications sharing a database.
The Database Resource Manager, introduced in Oracle8i, gives database administrators the capability to distribute system resources among the various users and applications sharing a database in a manner consistent with business priorities. Using this tool, DBAs can divide the users and applications of a database into different resource consumer groups and, by creating a resource plan, allocate CPU resources to each of them in terms of percentages.
A Resource Plan can help ensure that system resources are first made available to certain critical business operations before other users or applications can use them. It is also possible to cap the resource consumption of a group of users or an application so that the critical operations are not starved for resources when they need them. The ability to allocate resources at multiple levels and create nested resource plans provides database administrators with an extremely powerful and flexible mechanism to specify how the unused resources should be divided among different resource consumer groups.
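A minimal sketch of building such a plan with the DBMS_RESOURCE_MANAGER package (the plan and group names are illustrative):

```sql
BEGIN
  dbms_resource_manager.create_pending_area();

  dbms_resource_manager.create_plan(
    plan => 'DAYTIME', comment => 'Favor OLTP during business hours');

  dbms_resource_manager.create_consumer_group(
    consumer_group => 'OLTP_GROUP', comment => 'Online users');

  -- 80% of level-1 CPU to OLTP, the rest to everyone else.
  dbms_resource_manager.create_plan_directive(
    plan => 'DAYTIME', group_or_subplan => 'OLTP_GROUP',
    comment => 'Critical work', cpu_p1 => 80);
  dbms_resource_manager.create_plan_directive(
    plan => 'DAYTIME', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everything else', cpu_p1 => 20);

  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
END;
/

-- Activate the plan for the instance.
ALTER SYSTEM SET resource_manager_plan = 'DAYTIME';
```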
Oracle9i features a significantly enhanced Database Resource Manager with new capabilities that enable automatic and proactive management of the database workload, so as to ensure compliance with service level objectives. Using the Oracle9i Database Resource Manager, a database administrator can take advantage of these new capabilities.
Automatic Segment Management
The Automatic Segment Space Management feature is available only with
locally managed tablespaces. A new clause SEGMENT SPACE MANAGEMENT in the
CREATE TABLESPACE command allows administrators to choose between automatic
and manual modes. A tablespace created with MANUAL segment space management
continues to use FREELISTS for managing free space within the objects located
in it. The following example illustrates how to create a tablespace
with Automatic Segment Space Management.
CREATE TABLESPACE data DATAFILE '/u02/oracle/data/data01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;
All objects created in the above tablespace will use the automatic segment space management feature to manage their free space. Any specification of the PCTUSED, FREELISTS and FREELIST GROUPS parameters for objects created in this tablespace will be ignored. A new column called SEGMENT_SPACE_MANAGEMENT has been added to the DBA_TABLESPACES view to indicate the segment space management mode used by a tablespace.
Management Enhancements
With the introduction of Oracle 9i, Oracle has improved many areas
of the management infrastructure to make it easier for the DBA to automate
the day to day management aspects of the database.
Recovery Manager Improvements : RMAN in 9i implements new features designed to reduce the time and effort administrators spend on routine backup and recovery activities. RMAN features a friendlier interface and enhanced reporting. To ease backup and recovery operations, RMAN provides backup configuration and automatic management of backups and archived logs based on a user-specified recovery window, a new policy that controls when backups expire. These features reduce the time DBAs spend on day-to-day backup management activities.
Rollback Segment Management : Until 8i, DBAs had to carefully plan rollback segment sizes, the assignment of transactions, and the number of segments. With 9i, administrators can assign a tablespace for rollback space and let the database take care of issues like sizing, space utilization, block contention and consistent-read retention.
Dynamic Memory Management : Database administrators traditionally had to shut down Oracle instances in order to grow or shrink the SGA components. The Dynamic Memory Management feature allows the buffer cache and shared pool to be resized dynamically. 9i also introduces self-tuning of working memory for SQL execution by tuning the initialization parameters that control the allocation of private memory.
Persistent INIT.ora : 9i makes the INIT.ora parameters persistent across multiple shutdowns. This allows remote activities, like startup without a local copy of the INIT.ora, and also helps greatly with performance tuning, since parameter changes made by internal self-tuning or by tools like Oracle Enterprise Manager can persist across shutdowns.
Oracle 9i also introduces various features that ease management, like an execution plan history that lets the DBA investigate a bad statement without having to re-execute it. The introduction of Oracle Managed Files lets the DBA hand over space management and removes the extra work of managing the database's files. A default TEMPORARY tablespace has been introduced in 9i, eliminating the annoying use of the SYSTEM tablespace for temporary space.
Resumable Statements
Large operations such as batch updates or data loads can encounter
out of space failures after executing for a long period of time, sometimes
when they are just about to complete. Under such circumstances, database administrators are left with no choice but to re-execute the failed job under close supervision in order to ensure its successful completion.
This could be an extremely time and resource consuming exercise which can
disrupt business activities significantly especially if the failed job
is responsible for performing a critical business task such as the month
end financial closure.
Oracle9i introduces a new feature called "Resumable Space Allocation" which allows users to avoid such failures by instructing the database to suspend any operations that encounter an out of space error rather than aborting them. This provides an opportunity for users or administrators to fix the problem that caused the error while the operation is suspended and automatically resume its execution once the problem has been resolved. This feature enhances the ability of the Oracle database to tolerate failures on account of unforeseen space requirements and eliminates the wastage of precious administrator time and system resources spent on re-execution of failed operations. By allowing administrators to intervene in the middle of the execution of an operation, Resumable Space Allocation obviates the need of dividing a large job into smaller sub-jobs in order to limit the impact of any failure. Additionally, it will also enable application developers to write applications without worrying about running into space related errors.
A statement is executed in "resumable" mode when this is explicitly specified using the ALTER SESSION ENABLE RESUMABLE command. Virtually any kind of operation, be it a PL/SQL stored procedure, a Java stored procedure, a query, SQL*Loader, export/import, DML (such as UPDATE or INSERT) or DDL (CREATE TABLE AS SELECT, CREATE INDEX, INDEX REBUILD, ALTER TABLE MOVE PARTITION, etc.), can be run as a "resumable" statement. A "resumable" operation will be suspended whenever it encounters one of the following types of failures: an out-of-space condition, a maximum-extents-reached condition, or a space quota exceeded condition.
When the problem that caused the failure is fixed, the suspended statement automatically resumes execution. If the operation encounters a transient problem which may disappear automatically after some time such as temporary tablespace shortage, no administrator intervention may be required to resume its execution. A "resumable" query running out of temporary space may resume automatically with absolutely no user or administrator intervention once other active queries complete. A "resumable" operation may be suspended and resumed multiple times during its execution.
Every "resumable" operation has a time-out period associated with it. The default time-out period is 2 hours, but it can be set to any value using the ALTER SESSION ENABLE RESUMABLE TIMEOUT <time-out period in seconds> command. A suspended operation will automatically be aborted if the error condition is not fixed within the time-out period. An administrator can abort a suspended operation at any time using the DBMS_RESUMABLE.ABORT() procedure.
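A brief sketch of the session-level usage (the timeout value and table names are illustrative):

```sql
-- Suspend rather than fail on space errors, for up to one hour.
ALTER SESSION ENABLE RESUMABLE TIMEOUT 3600;

-- If this load hits an out-of-space condition it suspends; once the
-- DBA adds a datafile or extends the tablespace, it resumes by itself.
INSERT INTO big_table SELECT * FROM staging_table;

-- Suspended statements can be observed in the DBA_RESUMABLE view.
ALTER SESSION DISABLE RESUMABLE;
```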
Undo Tablespaces
The Oracle database keeps records of the actions of transactions before they are committed; Oracle needs this information to roll back, or undo, changes to the database. These records are called rollback or undo records. They are used to roll back transactions when a ROLLBACK statement is issued, during recovery of a database, and to provide a read-consistent view of data.
Up to Oracle 8i (and still in 9i), Oracle uses rollback segments to manage the undo data. Starting with Oracle 9i, DBAs are provided with a new feature referred to as "Undo Tablespace", which allows the DBA to exert more control over how long undo information is retained and eliminates the complexity of managing rollback segments in certain environments.
Starting with 9i, the rollback segment method is referred to as Manual Undo Management Mode and the new undo tablespace method as Automatic Undo Management Mode. Although both rollback segments and undo tablespaces are supported in Oracle 9i, the two modes cannot be used at the same time. The SYSTEM rollback segment exists in both modes. Since we are all familiar with the manual mode, let's look at the features of Automatic Undo Management (undo tablespaces).
Oracle9i Enterprise Manager allows you to create a new undo tablespace
for a database. In addition, under instance management, a new "Undo"
tab displays the name of the active undo tablespace and the current undo
retention time. Administrators can modify the retention time based on their
largest transaction time and immediately view the space required for the
undo tablespace. Undo generation rates are calculated based on statistics
available for undo space consumption for the current instance. The
"Undo Space Estimate" graph indicates the space requirements based on a
maximum undo generation rate and on an average undo generation rate and
relates them to the current undo setting.
Init.ora Parameters for Automatic Undo Management
UNDO_MANAGEMENT : This parameter sets the mode in which Oracle manages the undo information. The default value is MANUAL, so all your old init.ora files can be used without any changes. To put the database in automated mode, set this value to AUTO (UNDO_MANAGEMENT = AUTO).
UNDO_TABLESPACE : This parameter defines the tablespace to be used as the undo tablespace. If no value is specified, Oracle grabs the first available undo tablespace or, if none is present, Oracle will use the SYSTEM rollback segment to start up. This value is dynamic and can be changed online (UNDO_TABLESPACE = undo_tbs1).
UNDO_RETENTION : This value specifies the amount of time undo is kept in the tablespace. It applies to both committed and uncommitted transactions, since the Flashback Query feature introduced in Oracle needs this information to create a read-consistent copy of the data in the past. The default value is 900 seconds (UNDO_RETENTION = 500).
UNDO_SUPPRESS_ERRORS : This is a good parameter to know about in case your code contains ALTER TRANSACTION commands that perform manual undo management operations. Set it to TRUE to suppress the errors generated when manual-management SQL operations are issued while in automated management mode.
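Taken together, a hedged init.ora fragment for automatic undo management might look like this (the tablespace name is illustrative):

```
# init.ora fragment for Automatic Undo Management
UNDO_MANAGEMENT = AUTO
UNDO_TABLESPACE = undo_tbs1
UNDO_RETENTION = 900            # seconds of undo to retain
UNDO_SUPPRESS_ERRORS = TRUE     # ignore legacy manual-undo commands
```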
Creating and Managing Undo Tablespaces :
Undo tablespaces use syntax similar to that of regular tablespaces except that they use the keyword UNDO. These tablespaces can be created at database creation time or can be added to an existing database using the CREATE UNDO TABLESPACE command:
Create DATABASE uday controlfile ...........
UNDO Tablespace undo_tbs0 datafile '/vol1/data/uday/undotbs0.dbf' ...
Create UNDO Tablespace undo_tbs1
datafile '/vol1/data/uday/undotbs1.dbf' size 25m autoextend on;
Operations like renaming a datafile, adding a datafile, an online/offline switch, or a begin backup/end backup switch can be made using the regular ALTER TABLESPACE command. All other operations are managed by Oracle in the automated management mode.
Monitoring :
V$UNDOSTAT : This view contains statistics for monitoring the effects of transaction execution on undo space in the current instance. Statistics are available for space usage, transaction concurrency and the length of query operations. The view spans a 24-hour period, and each row contains data for a 10-minute interval, specified by the BEGIN_TIME and END_TIME columns.
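A quick sketch of sizing undo from these statistics (the arithmetic in the comment is illustrative): the space needed is roughly the peak undo generation rate times the retention time.

```sql
-- Peak undo blocks generated per second over the period covered
-- by V$UNDOSTAT (date subtraction yields days, hence * 86400).
SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS undo_blks_per_sec
FROM   v$undostat;

-- Rough tablespace size = peak rate * UNDO_RETENTION * block size,
-- e.g. 100 blocks/sec * 900 sec * 8192 bytes ~= 700MB of undo space.
```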
New Initialization Parameters for Buffer Cache Sizing
The Buffer Cache consists of independent sub-caches for buffer pools
and for multiple block sizes. The parameter db_block_size determines the primary block size; this is the block size used for the SYSTEM tablespace. The following parameters define the sizes of the caches of buffers for the primary block size:
DB_CACHE_SIZE
DB_KEEP_CACHE_SIZE
DB_RECYCLE_CACHE_SIZE
The db_keep_cache_size and db_recycle_cache_size are independent of
db_cache_size.
These parameters are specified in units of memory rather than in units of buffers (as was the case in Oracle8i and below).
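A minimal sketch of sizing the caches dynamically (the sizes below are illustrative):

```sql
-- The caches are sized in bytes/KB/MB, not in numbers of buffers.
ALTER SYSTEM SET db_cache_size = 128M;
ALTER SYSTEM SET db_keep_cache_size = 16M;
ALTER SYSTEM SET db_recycle_cache_size = 16M;

-- Current values can be checked in V$PARAMETER or with SHOW PARAMETER.
```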
Initialization Parameters Affected
The following parameters are automatically computed:
DB_BLOCK_LRU_LATCHES - The number of LRU latches in each buffer pool for each block size will be equal to half the number of CPUs.
DB_WRITER_PROCESSES - The number of DBWRs will be equal to 1/8th the number of CPUs.
The following parameters have been deprecated and have been maintained only for backward compatibility:
DB_BLOCK_BUFFERS
BUFFER_POOL_KEEP
BUFFER_POOL_RECYCLE
These parameters cannot be combined with the dynamic SGA feature parameters; setting them along with the dynamic SGA parameters results in an error. For example, if db_block_buffers as well as db_cache_size are set, then startup errors out as follows:
SQL> startup pfile=initv09.ora
ORA-00381: cannot use both new and old parameters
for buffer cache size specification
Transport tablespaces with different block sizes between databases
Tablespace transportation between databases with different block sizes is allowed in Oracle9i.
Oracle9i offers the possibility to plug into a 9i database any tablespace transported from an 8i database of any block size.
Plug an 8i tablespace of 2K db block size into a 9i database of 4K db block size
1. Creation of the tablespace in the 8.1.5 database (db_block_size=2048)
SQL> create tablespace test_tts datafile '/ora/ora815/oradata/V815/tts01.dbf'
size 100k;
Tablespace created.
SQL> create table test (c number) tablespace test_tts;
Table created.
SQL> insert into test values (1);
1 row created.
SQL> commit;
Commit complete.
SQL> execute sys.dbms_tts.transport_set_check('TEST_TTS',true);
PL/SQL procedure successfully completed.
SQL> SELECT * FROM sys.transport_set_violations;
no rows selected
2. Prepare the 8.1.5 tablespace to be transported to the 9i database
SQL> alter tablespace test_tts read only;
Tablespace altered.
3. Export the 8.1.5 tablespace for transportation
$ exp sys/manager transport_tablespace=y tablespaces=TEST_TTS
Export: Release 8.1.5.0.0 - Production on Thu May 3 13:46:11 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.5.1.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.1.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TEST_TTS ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                            TEST
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.
4. Ftp the export dump file and datafiles to the 9i database server
5. Import the 8.1.5 tablespace into the 9i database (db_block_size=4096)
$ imp transport_tablespace=y tablespaces=TEST_TTS datafiles='/oradata/V900/V900/tts01.dbf'
Import: Release 9.0.0.0.0 - Beta on Thu May 3 14:37:07 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
With the Partitioning option
JServer Release 9.0.0.0.0 - Beta
Export file created by EXPORT:V08.01.05 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
export server uses US7ASCII NCHAR character set (possible ncharset conversion)
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 29339:
 "BEGIN sys.dbms_plugts.beginImpTablespace('TEST_TTS',9,'SYS',1,0,2048,1,59"
 "65974761350,1,121,5,5,0,50,1,0,0,3471198394,1,0,5965974761260,NULL,0,0,NULL"
 ",NULL); END;"
IMP-00003: ORACLE error 29339 encountered
ORA-29339: tablespace block size 2048 does not match configured block sizes
ORA-06512: at "SYS.DBMS_PLUGTS", line 1420
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully
$
6. The following setting must be in place before plugging in a tablespace whose block size differs from the 9i database block size:
SQL> alter system set db_2k_cache_size=10m;
System altered.
$ imp transport_tablespace=y tablespaces=TEST_TTS datafiles='/oradata/V900/V900/tts01.dbf'
Import: Release 9.0.0.0.0 - Beta on Thu May 3 14:49:49 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
With the Partitioning option
JServer Release 9.0.0.0.0 - Beta
Export file created by EXPORT:V08.01.05 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
export server uses US7ASCII NCHAR character set (possible ncharset conversion)
. importing SYS's objects into SYS
. importing SYSTEM's objects into SYSTEM
. . importing table                          "TEST"
Import terminated successfully without warnings.
$
SQL> select tablespace_name, block_size from dba_tablespaces;
TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
SYSTEM                               4096
UNDOTBS                              4096
EXAMPLE                              4096
INDX                                 4096
TEMP                                 4096
TOOLS                                4096
USERS                                4096
TEST_TTS                             2048
SQL> alter tablespace test_tts read write;
Tablespace altered.
SQL> select * from system.test;
C
----------
1
Plug a 9i tablespace of 8K block size into another 9i database of 4K db block size
1/ Creation of an 8K block size tablespace in the 9.0.0 database (db_block_size=4096)
SQL> create tablespace tts_8k
datafile '/export/home1/ora900/oradata/V900/tts.dbf'
size 200k blocksize 8k;
Tablespace created.
SQL> create table oe.test (c number) tablespace tts_8k;
Table created.
SQL> insert into oe.test values (1);
1 row created.
SQL> commit;
Commit complete.
2/ Prepare the 9.0.0 8K tablespace to be transported to another 9i database
SQL> alter tablespace tts_8k read only;
Tablespace altered.
3/ Export the 9.0.0 8K tablespace for transportation
$ NLS_LANG=american_america.WE8ISO8859P1
$ export NLS_LANG
$ exp transport_tablespace=y tablespaces=TTS_8K file=/tmp/djeunot/expdat.dmp
Export: Release 9.0.0.0.0 - Beta on Thu May 3 15:30:19 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
With the Partitioning option
JServer Release 9.0.0.0.0 - Beta
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TTS_8K ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                            TEST
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.
$
4/ Ftp the export dump file and datafiles to the target 9i database server
5/ Import the 9.0.0 8K tablespace into the target 9i database (db_block_size=4096)
SQL> alter system set db_8k_cache_size =8m;
System altered.
$ NLS_LANG=american_america.WE8ISO8859P1
$ export NLS_LANG
$ imp transport_tablespace=y tablespaces=TTS_8K datafiles='/oradata/V900/V900/tts_8k.dbf'
file=/tmp/expdat.dmp
Import: Release 9.0.0.0.0 - Beta on Thu May 3 15:40:14 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
With the Partitioning option
JServer Release 9.0.0.0.0 - Beta
Export file created by EXPORT:V09.00.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing OE's objects into OE
. . importing table "TEST"
Import terminated successfully without warnings.
SQL> select tablespace_name, block_size from dba_tablespaces;
TABLESPACE_NAME                BLOCK_SIZE
------------------------------ ----------
SYSTEM                               4096
UNDOTBS                              4096
EXAMPLE                              4096
INDX                                 4096
TEMP                                 4096
TOOLS                                4096
USERS                                4096
TEST_TTS                             2048
TTS_8K                               8192
SQL> select * from oe.test;
C
----------
1
Note
----
The restrictions that existed in 8i regarding character sets when transporting
tablespaces from one database to another still remain.
The target database must have the same character set as the source
database.
$ imp transport_tablespace=y tablespaces=TTS_8K datafiles='/oradata/V900/V900/tp
Import: Release 9.0.0.0.0 - Beta on Thu May 3 15:33:14 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: / as sysdba
Connected to: Oracle9i Enterprise Edition Release 9.0.0.0.0 - Beta
With the Partitioning option
JServer Release 9.0.0.0.0 - Beta
Export file created by EXPORT:V09.00.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in US7ASCII character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 29345:
"BEGIN sys.dbms_plugts.beginImport ('9.0.0.0.0',31,'2000',NULL,'NULL',3177"
"0,7851,1); END;"
IMP-00003: ORACLE error 29345 encountered
ORA-29345: cannot plug a tablespace into a database using a different character set
ORA-06512: at "SYS.DBMS_PLUGTS", line 1714
ORA-06512: at "SYS.DBMS_PLUGTS", line 1553
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully
$
Multi Table Insert
Multi Table Insert is a cool new feature that every DBA and developer
can appreciate. Until Oracle8i, you had to run through the data multiple
times, or write procedural code, to perform inserts into multiple tables
in one pass. Oracle9i's Multi Table Insert lets you insert data into
multiple tables in a single pass.
The multitable insert feature allows the INSERT . . . SELECT statement
to use multiple tables as targets. In addition, it can distribute data
among target tables based on logical attributes of the new rows. Multitable
insert thus enables a single scan and transformation of source data to
insert data into multiple tables, sharply increasing performance.
Prior to Oracle9i, such processing could be done in two different ways.
It could be done through SQL with an independent INSERT . . . SELECT statement
for each table. This approach meant processing the same source data and
the transformation workload once for each target. If the SELECT statement
itself used a major transformation (which it often does, e.g., GROUP BY),
then the transformed data was either recomputed for each scan or materialized
beforehand in a temporary table to improve performance. Alternatively,
this type of work could be performed in procedural programs: every row
would be examined to determine how to handle the insertion.
Both of these techniques had significant drawbacks. If the insert was
performed through SQL, multiple scans and transforms of the source data
would hurt performance. Creating a temporary table to hold precomputed
transformation results could consume large amounts of disk space, and the
temporary table would still be scanned multiple times. The procedural programming
approach could not use the high-speed access paths directly available in
SQL, so it also faced performance handicaps.
Example
The following example statement inserts new customer information from
the customers_new table into two tables, customers and customers_special.
If a customer has a credit limit greater than 4500, the row is inserted
into customers_special. All customer rows are inserted into table customers.
The operation in Oracle8i could be implemented as follows:
INSERT
INTO customers_special (cust_id,cust_credit_limit)
SELECT cust_id, cust_credit_limit
FROM customers_new
WHERE cust_credit_limit >=4500;
INSERT
INTO customers
SELECT * FROM customers_new;
The same operation in Oracle9i could be implemented as follows:
INSERT FIRST
WHEN cust_credit_limit >=4500 THEN
INTO customers_special VALUES(cust_id,cust_credit_limit)
INTO customers
ELSE
INTO customers
SELECT * FROM customers_new;
Trial Recovery
Some problems that may occur during media recovery are not "recoverable".
For example, if a redo record was somehow corrupted and recovery could not
proceed past that point in the redo stream, the recovered database becomes
unusable. This is known as "stuck recovery". When problems such as stuck
recovery occur, you face a difficult choice. If the affected block is relatively
unimportant, and if the problem is isolated, then it is better to mark
the block as corrupt and continue applying redo. But if the problem is not
isolated, then it may be better to open the database with the RESETLOGS
option. This means that one must restore the backup and recover the database
again to an SCN before the point where the corruption occurred. For a large
database, restoring a backup and recovering the database can take a long time.
To address this problem, the concept of Trial Recovery is introduced. Trial recovery applies redo in a way similar to normal media recovery, but it never writes its changes to disk, and it always rolls back its changes at the end of the test. Trial recovery is designed to allow a DBA to peek ahead in the redo stream to see if there are additional problems.
Trial Recovery provides database administrators with enhancements that make almost all practical problems encountered during media recovery recoverable.
How Trial Recovery Works
By default, if a trial recovery encounters a stuck recovery or similar
problem, then it always marks the data block as corrupt in memory when
this action can allow recovery to proceed. Oracle writes errors generated
during trial recovery to alert files. Oracle clearly marks these errors
as test run errors.
Like normal media recovery, trial recovery can prompt you for archived
log filenames and ask you to apply them. Trial recovery ends when it completes
the requested recovery, when it runs out of the memory set aside for the test,
or when it encounters an error that cannot be test-corrected; in every case,
all of its in-memory changes are then rolled back.
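As a sketch of the syntax involved, a trial run is requested by appending the TEST keyword to an ordinary recovery command (the tablespace name and corruption count below are illustrative):

```sql
-- Dry-run media recovery: redo is applied in memory only and
-- all changes are rolled back at the end of the test.
RECOVER DATABASE TEST;

-- Trial recovery can also be scoped to a tablespace.
RECOVER TABLESPACE users TEST;

-- If the trial run shows only a few corrupt blocks, the real
-- recovery can be told to tolerate them and continue.
RECOVER DATABASE ALLOW 2 CORRUPTION;
```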
RMAN New Features
Recovery Manager automates backup and recovery by querying information
in the recovery catalog, the database's control file, and any datafiles
affected by the operations requested. Recovery Manager decides the most
efficient method of executing the requested backup, restore, or recovery
operation and then issues these steps to the Oracle server. Recovery Manager
and the server automatically identify modifications to the structure of
the database, and dynamically adjust the current operation to cater to
the changes.
KEY FEATURES OF RECOVERY MANAGER
- Automation of backup, restore and recovery operations
- Block media recovery
- Whole database backups or backups of any logical unit: control file,
datafile, tablespace or archive log
- Offline and Online backups
- Integrated with 3rd Party Media Management Vendors to backup to tape
- Two types of backup: image copies to disk or Recovery Manager backup
sets
- Intelligent management of the archived redo logs for both backup
and recovery
- Proxy Copy Backup Accelerator for fast copy technology at the storage
subsystem level
- Corrupt block detection
- Tablespace Point-in-Time Recovery support
- Ability to catalog on disk operating system backups
- Integration with Oracle Enterprise Manager's Backup Manager GUI
- Incremental backups at the Oracle block level for high performance
backup and recovery
- Omission of empty blocks during backup for optimization
- Recoverability redundancy when both incrementals and archiving are
used
- No extra redo is generated during online database backup
- Intelligent buffering with multiplexing of multiple input files for
high speed tape streaming
- Support for Oracle Real Application Clusters backup and recovery
- Ability to recover through unrecoverable operations, when using incremental
backups
- O/S-independent scripting language
Persistent RMAN Configurations
A CONFIGURE command has been introduced in Oracle9i that lets you
configure various features, including automatic channels, parallelism and
backup options, that persist across sessions and are available to any
session. For example, the CONFIGURE command may be used to store channel
configurations, and RMAN will automatically assign the channels as needed
without requiring the channel allocation to be specified as part of the
script. These automatic allocations and options can be overridden by
commands in an RMAN command file.
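A sketch of how such persistent settings might be defined (the device, parallelism and retention values here are illustrative): run once at the RMAN prompt, they then apply to every later session.

```sql
-- Persist a default device and two parallel disk channels;
-- later BACKUP commands need no explicit ALLOCATE CHANNEL.
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;

-- Keep enough backups to restore to any point in the last 7 days.
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
```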
Control File Auto Backups
Many database administrators stayed away from RMAN, citing its reliance
on control files and the recovery catalog for backup and recovery operations.
Now you need not worry about a lost recovery catalog or a lost control file.
Control File Auto Backups give the DBA a way to restore the backup repository
contained in the control file when the control file and the recovery catalog
are both lost. For those of us using the nocatalog option, this comes in very
handy when we have to recover without the control file that was used for the
RMAN backup.
To use this feature, CONFIGURE CONTROLFILE AUTOBACKUP should be set to ON; RMAN will then automatically perform a control file autobackup after every BACKUP or COPY command, whether it is issued in a RUN block or at the command prompt. This control file backup occurs in addition to any control file backup created as part of the database backup.
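A minimal sketch of enabling and later relying on the feature (the DBID shown is hypothetical):

```sql
-- Enable automatic control file backup after each BACKUP/COPY.
CONFIGURE CONTROLFILE AUTOBACKUP ON;

-- After losing both the control file and the catalog, the repository
-- can be brought back from the autobackup:
SET DBID 1234567890;   -- hypothetical DBID of the lost database
RESTORE CONTROLFILE FROM AUTOBACKUP;
```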
Block Media Recovery
RMAN introduces a new BLOCKRECOVER command to recover individual data blocks
within a datafile. This reduces the Mean Time To Recover (MTTR) for
a large datafile when individual blocks within the file are reported as
corrupt. Without this feature, even if a single block is corrupted, the
DBA has to restore the whole datafile and recover it using the archived
redo logs generated since that backup was created.
This procedure is useful when a trace file or standard output reveals that a small number of blocks within a datafile are corrupt. A major restriction is that you can only recover from a full RMAN backup; incremental backups cannot be used for this type of recovery. Also, only complete recovery can be performed on individual blocks, and you cannot stop the redo application while the recovery is in progress.
Block media recovery can be performed at the RMAN command prompt with a command similar to the one below.
BLOCKRECOVER DATAFILE 6 BLOCK 23 DATAFILE 12 BLOCK 9;
Other Enhancements
To make the job of the DBA less error prone, Oracle has enhanced the CHANGE,
CROSSCHECK, DELETE and LIST commands. RMAN also automatically switches
out the online redo log before backing up the archived redo logs, to guarantee
that the backups can be restored to a consistent state. One cool feature
I like is the "NOT BACKED UP SINCE" clause, which allows RMAN to back up
only those files that were not backed up after a specified time. So, if
a backup fails partway through, you can restart it and back up only those
files that were not previously backed up.
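For instance, a backup that died partway through might be restarted with a command along these lines, so that files already backed up within the last day are skipped (the one-day window is illustrative):

```sql
-- Back up only files without a backup in the last 24 hours.
BACKUP DATABASE NOT BACKED UP SINCE TIME 'SYSDATE - 1';
```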
Self Managing
Rollbacks and Files
Oracle9i databases are capable of managing their own undo (rollback)
segments - no longer will administrators need to carefully plan and tune
the number and sizes of rollback segments, or bother about how to strategically
assign transactions to a particular rollback segment.
Oracle9i also allows administrators to allocate their undo space in
a single undo tablespace with the database taking care of issues such as
undo block contention, consistent read retention and space utilization.
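Assuming the initialization parameters introduced for this feature, automatic undo management might be enabled along these lines (the tablespace name and file path are illustrative):

```sql
-- init.ora settings:
--   UNDO_MANAGEMENT = AUTO
--   UNDO_TABLESPACE = undotbs1

-- A single undo tablespace replaces manually tuned rollback segments.
CREATE UNDO TABLESPACE undotbs1
  DATAFILE '/u01/oradata/undotbs1.dbf' SIZE 200M;
```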
Oracle9i also introduces the concept of "Oracle Managed Files" which
simplifies database administration by eliminating the need for administrators
to directly manage the files comprising an Oracle database.
Oracle9i now internally uses standard file system interfaces to create
and delete files as needed. While administrators still need to be involved
in space planning and administration, this feature automates the routine
task of creation and deletion of database files.
Oracle uses the standard operating system (OS) file system interfaces
internally to create and delete files as needed for tablespaces, online
logs and controlfiles. DBAs only need to specify the location of these
files using new initialization parameters. Oracle then ensures creation
of a file with a unique name and deletes it when the corresponding object
is dropped. OMF reduces errors caused by administrators specifying incorrect
file names, reduces disk space wasted in obsolete files, and simplifies
creation of test and development databases. It also makes development of
portable third party applications easier since it eliminates the need to
put OS specific file names in SQL scripts.
While the parameter DB_CREATE_FILE_DEST specifies the default location
of datafiles, the parameter DB_CREATE_ONLINE_LOG_DEST_<n>, where n is
any integer between 1 and 5, decides the default location for copies of
online logs and controlfiles. If neither of the last two parameters are
set, all the files (datafiles, controlfiles and online logs) will be created
at the destination specified by the DB_CREATE_FILE_DEST parameter. Oracle
Managed datafiles, created by default, will be 100 MB in size and will
be auto extensible with unlimited maximum size. The default size of Oracle
Managed online logs will also be 100MB.
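A sketch of the feature in use (the directory path and tablespace name are illustrative): once the destination is set, a tablespace can be created without naming any datafile, and dropping it removes the file.

```sql
ALTER SYSTEM SET db_create_file_dest = '/u02/oradata/mydb';

-- Oracle creates and names a 100MB auto-extensible datafile itself.
CREATE TABLESPACE app_data;

-- The underlying OS file is deleted automatically.
DROP TABLESPACE app_data;
```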
* Automatic Segment Free Space Management
Free space within a segment is maintained using bitmaps
Eliminates the necessity to tune parameters such as FREELISTS, FREELIST
GROUPS and PCTUSED
Improves space utilization especially for objects with rows of highly
varying sizes
Improves concurrency of INSERT operations
Better multi-instance behavior in terms of performance/space utilization
* Oracle Managed Files
Eliminates the need to directly manage the Oracle Database files
DBAs specify operation in terms of database objects; Oracle automatically
creates and deletes OS files as needed
Failed DDL operations automatically delete partially created underlying
files
Administrators can specify different locations for datafiles and
online redo log/control files
Redo logs and control files can be multiplexed across five destinations
Third Party Applications need not embed OS specific file names in
their scripts
* Default Temporary Tablespace
DBAs will now be able to assign a database wide default temporary
tablespace
Prevents the use of SYSTEM tablespace for storing temporary data
* Delete Datafiles
Datafiles can automatically be deleted when dropping a tablespace
Eliminates proliferation of obsolete files
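Both conveniences above can be sketched in two statements (the tablespace names are illustrative):

```sql
-- Give every new user this temporary tablespace instead of SYSTEM.
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

-- Drop a tablespace and remove its underlying datafiles in one step.
DROP TABLESPACE old_data INCLUDING CONTENTS AND DATAFILES;
```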
External Tables
The External Table feature allows for flat files, which reside outside
the database, to be accessed just like relational tables within the database:
the flat-file data can be queried and joined to other tables using standard
SQL. Data access can be serial or parallel for maximum scalability. From
a user's point of view, the main difference between an external table and
a regular table is that the external table is read-only.
To understand the value of external tables, consider that a data warehouse
always contains data from external data sources. Prior to Oracle9i, performing
complex transformations within Oracle required that the external data first
be loaded into staging tables inside the database, or else transformed in
flat files outside the database before being loaded.
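A minimal sketch of an external table definition using the new ORACLE_LOADER access driver (the directory path, file name and columns are illustrative):

```sql
CREATE DIRECTORY ext_dir AS '/data/feeds';

CREATE TABLE sales_ext (
  prod_id  NUMBER,
  amount   NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sales.csv')
);

-- The flat file can now be queried and joined like any table.
SELECT COUNT(*) FROM sales_ext;
```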
Performance Manager
Oracle Enterprise Manager Performance Pack provides an enhanced real-time
diagnostics interface with a single graphical Performance Overview for
"at-a-glance" Oracle performance assessment. The Overview chart includes
both host and database performance metrics organized by CPU, memory and
I/O resource usage. Top resource-consuming sessions and database contention
are also displayed. Chart threshold flags display the status of
database and system performance allowing DBAs to immediately identify performance
exceptions. Trend arrows provide a quick check on whether performance
is improving or degrading.
The Diagnostics Pack also provides a set of pre-defined event tests,
known as Advanced Events, used to automatically monitor conditions and
detect problems for key elements of the Oracle environment. The Advanced
Events can be configured to check one or more conditions for a target system.
Thresholds, number of occurrences, as well as event notification
plans and corrective actions can be customized. In addition, with
version 9i, administrators can investigate an event directly from the Console's
event viewer through direct access to the Performance Manager charts, diagnostic
help and advice, and historical data related to metrics being monitored
by the event.
Automatic
SQL Execution Memory Management
The performance of complex long running queries, typical in a DSS environment,
depends to a large extent on the memory available in the Program Global
Area (PGA). In Oracle8i and previous releases of the database, administrators
sized the PGA by carefully adjusting a number of initialization parameters.
Oracle9i completely automates the management of PGA memory. Administrators
merely need to specify the maximum amount of PGA memory available to an
instance using a newly introduced parameter PGA_AGGREGATE_TARGET. The database
server automatically distributes this memory among various active queries
in an intelligent manner so as to ensure maximum performance benefits and
the most efficient utilization of memory. Furthermore, Oracle9i can adapt
itself to changing workloads, thus utilizing resources efficiently regardless
of the load on the system. The amount of PGA memory available to an
instance can be dynamically changed by altering the value of the PGA_AGGREGATE_TARGET
parameter making it possible to add to and remove PGA memory from an active
instance online. In the automatic mode, administrators no longer have to
manually tune the sizes of the individual work areas using parameters such
as SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE and CREATE_BITMAP_AREA_SIZE.
The automatic SQL execution memory management feature is enabled by
setting the parameter WORKAREA_SIZE_POLICY to AUTO. For backward compatibility
reasons, Oracle9i continues to support the manual PGA management mode.
The manual mode can be activated by either setting the WORKAREA_SIZE_POLICY
parameter to MANUAL or not specifying a value for PGA_AGGREGATE_TARGET.
This feature will help DBAs reduce the time required to tune the memory
usage for their DSS applications. The savings gained from the improved use
of memory should translate into better throughput when large numbers of users
are on the system, as well as improved response time for queries.
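The switch between the two modes can be sketched as follows (the target size is illustrative):

```sql
-- Automatic mode: one knob sizes all SQL work areas.
ALTER SYSTEM SET pga_aggregate_target = 200M;
ALTER SYSTEM SET workarea_size_policy = AUTO;

-- Manual (pre-9i style) mode can still be requested per session.
ALTER SESSION SET workarea_size_policy = MANUAL;
```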
First-N rows Optimization
With first-N rows optimization, Oracle queries give the best possible
response time for the first rows of a result set. Quicker response with
the first N rows enhances user satisfaction in many applications, since
users receive their initial set of data more rapidly.
By default Oracle optimizes queries to maximize throughput. High throughput
means that Oracle chooses the execution plan needing the least resources
to process all rows returned by the query. While this approach is valid
for many applications, optimizing queries for fast initial response time
has become increasingly important. Highly interactive applications for
end users need to provide the first few rows of results as quickly as possible.
Examples of such applications include web-based or form-based query systems
for business intelligence. Users of such systems often issue database requests
that can produce hundreds or thousands of rows, but the users only need
to view a few rows at a time.
Consider a user who issues a query and waits a long time for the response,
only to find that the query returned 1000 rows. In practice, the user may
only wish to examine a few tens of rows at once. Having all the rows presented
at the same time gives the user no special benefit. Therefore, the user
will be more satisfied if the application presents the first few rows as
quickly as possible and reduces his waiting time. While the user studies
the first rows of results, the application can continue work on the remaining
data.
The first-N rows optimization instructs the query optimizer to choose
a query execution plan that minimizes the response time to produce the
first N rows of query results. The value of N is specified by the application
developer and depends on the usage pattern of the application. First-N
rows optimization in Oracle9i is a significant enhancement of the first
rows optimization found in prior releases. While the earlier optimization
used a mix of internal rules and costs to produce a query plan, Oracle9i's
first-N rows optimization is fully cost-based.
This first-N rows optimization can be enabled in three different ways.
In each case shown below, the value for N is the number of rows for which
to optimize response time:
OPTIMIZER_MODE initialization parameter
The user would include the following in the initialization file:
OPTIMIZER_MODE = FIRST_ROWS_N
OPTIMIZER_MODE session parameter.
The user would issue the following statement:
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_N
If the session setting is used, it will override any initialization
settings.
OPTIMIZER hint.
The query text would include the following hint:
SELECT /*+ FIRST_ROWS(N) */ ....
If the hint is used, it will override any session and initialization
settings.
The enhanced first-N rows optimization available in Oracle9i enables
major performance improvements for all types of database schemas and queries.
SQL Aggregation Enhancements
In Oracle8i, aggregation processing was enhanced with the addition
of the CUBE and ROLLUP operators. These operators, extensions to SQL's
"GROUP BY" clause, enable a single SQL query to calculate certain important
sets of aggregation groups efficiently. Oracle9i extends SQL aggregation
capabilities further with grouping sets, concatenated grouping sets and
composite columns. These features enable SQL queries to specify the exact
levels of aggregation needed, simplifying development and significantly
enhancing query performance.
Grouping Sets
Grouping sets enable a single query to list multiple sets of columns
which will be grouped when calculating aggregates such as sums. Grouping
sets are specified by following the GROUP BY keyword with the term "GROUPING
SETS" and a column specification list. Here is an example:
SELECT year, region, product, SUM(sales)
FROM salesTable
GROUP BY GROUPING SETS ((year, region, product),
                        (year, product),
                        (region, product));
The SQL above calculates aggregates over exactly 3 groupings: (year, region, product), (year, product), and (region, product). Without grouping sets, the query would need to be specified with either a CUBE operator or a UNION ALL query. Both of these alternative approaches have significant shortcomings: a CUBE computes every possible grouping, so the unwanted ones must be calculated and then discarded, while a UNION ALL query repeats the scan and aggregation work once for each grouping.
Concatenated Grouping Sets
Grouping sets and ROLLUP or CUBE operators can also be concatenated in a single GROUP BY clause. Concatenation produces the cross-product of the individual sets, making it a concise way to specify a large number of groupings. For example:
GROUP BY GROUPING SETS (month, year),
GROUPING SETS (region, country)
The SQL above defines the following groupings:
(month, region), (month, country), (year, region) and (year, country)
An important use for concatenated grouping sets is to generate the aggregates needed for OLAP data cubes. OLAP data cubes require aggregations along the rollup hierarchies of multiple dimensions, and this can be specified concisely by concatenating ROLLUP operators. The Oracle9i OLAP option takes advantage of this new feature for high performance processing.
Composite Columns in GROUP-BY Expressions
Oracle9i also enables the use of composite columns within GROUP BY
expressions. A composite column is a collection of columns, specified within
parentheses, that is treated as a single unit during the computation of
groupings. For example, by enclosing a subset of the columns in a ROLLUP
list within parentheses, some levels will be skipped during ROLLUP. Here
is a simple case where (quarter, month) is a composite column:
GROUP BY ROLLUP (year, (quarter, month), day)
Because of its composite column of (quarter, month), the SQL above never
separates the quarter and month columns in its ROLLUP. This means that
it never shows a rollup at the (year, quarter) level and thus skips the
quarter-level aggregates. Instead the query returns the following groupings:
(year, quarter, month, day), (year, quarter,
month), (year), ()
As with grouping sets, precise specification of the groupings needed
means simpler development and faster performance.
Oracle9i's new SQL aggregation features of grouping sets, concatenated
grouping sets and composite columns provide high efficiency aggregations
for the full range of business intelligence tasks. Oracle products such
as the Oracle9i OLAP option are already taking advantage of the features
to optimize performance, and users across the spectrum of business intelligence
processing will benefit from these features. The SQL aggregation enhancements
are just one aspect of Oracle9i business intelligence functionality enabling
more productive development and faster query performance.
Improving
Query Performance with the SQL WITH Clause
Oracle9i significantly enhances both the functionality and performance
of SQL to address the requirements of business intelligence queries. The
SELECT statement's WITH clause, introduced in Oracle9i, provides
powerful new syntax for enhancing query performance. It optimizes query
speed by eliminating redundant processing in complex queries.
Consider a lengthy query which has multiple references to a single
subquery block. Processing subquery blocks can be costly, so recomputing
a block every time it is referenced in the SELECT statement is highly inefficient.
The WITH clause enables a SELECT statement to define the subquery block
at the start of the query, process the block just once, label the results,
and then refer to the results multiple times.
The WITH clause, formally known as the subquery factoring clause, is
part of the SQL-99 standard. The clause precedes the SELECT statement of
a query and starts with the keyword "WITH". The WITH is followed
by the subquery definition and a label for the result set. The query below
shows a basic example of the clause:
WITH channel_summary AS
( SELECT channels.channel_desc,
SUM(amount_sold) AS channel_total
FROM sales, channels
WHERE sales.channel_id = channels.channel_id
GROUP BY channels.channel_desc )
SELECT channel_desc, channel_total
FROM channel_summary
WHERE channel_total >
( SELECT SUM(channel_total) * 1/3
FROM channel_summary );
This query uses the WITH clause to calculate the sum of sales for each
sales channel and label the results as channel_summary. Then it checks
each channel's sales total to see if any channel's sales are greater than
one third of the total sales. By using the new clause, the channel_summary
data is calculated just once, avoiding an extra scan through the large
sales table.
Although the primary purpose of the WITH clause is performance improvement,
it also makes queries easier to read, write and maintain. Rather than duplicating
a large block repeatedly through a SELECT statement, the block is localized
at the very start of the query. Note that the clause can define multiple
subquery blocks at the start of a SELECT statement: when several blocks
are defined at the start, the query text is greatly simplified and its
speed vastly improved.
Index Skip Scans
"Index skip scans" are a new performance feature in Oracle9i which
can provide significant benefits to any database application which uses
composite indexes. In releases prior to Oracle9i, a composite index would
only be used for a given query if either the leading column (or columns)
of the index was included in the WHERE-clause of the query, or (less often)
the entire index was scanned.
With Oracle9i, a composite index can be used even if the leading column(s)
are not accessed by the query, via a technique called an 'index skip scan'.
During a skip scan, the composite index is accessed once for each distinct
value of the leading column(s). For each distinct value, the index is searched
to find the query's target values. The result is a scan that skips across
the index structure.
Index skip scans provide two main benefits. First, index skip scans
can improve the performance of certain queries, since queries which previously
required table scans may now be able to take advantage of an existing index.
Second, index skip scans may allow an application to meet its performance
goals with fewer indexes; fewer indexes will require less storage space
and may improve the performance of DML and maintenance operations.
To illustrate index skip scans, suppose that we have a 2.5-million-row
table containing information on automobile registrations:
REGISTERED_OWNERS
(
STATE VARCHAR2(2) NOT NULL,
REGISTRATION# VARCHAR2(10) NOT NULL,
FIRST_NAME VARCHAR2(30),
LAST_NAME VARCHAR2(30),
MAKE VARCHAR2(30),
MODEL VARCHAR2(15),
YEAR_MFG NUMBER,
CITY VARCHAR2(30),
ZIP NUMBER
)
Furthermore, suppose that this table has a composite index on (STATE,
REGISTRATION#).
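The composite index assumed above might be created as follows (the index name is illustrative):

```sql
-- Leading column STATE, trailing column REGISTRATION#.
CREATE INDEX reg_owner_idx
  ON registered_owners (state, registration#);
```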
The following query will be able to take advantage of the index using
index skip scans:
SELECT first_name, last_name, zip
FROM REGISTERED_OWNERS WHERE registration# = '4FBB428'
By using the index, the query runs almost ninety times faster. On Oracle8i
(without index skip scans), this query requires 21.42 seconds. Running
on the same hardware and using Oracle9i's index skip scans, this query
executes in only .24 seconds.
Index skip scans can be extremely useful, and they are transparent
to users and applications. This feature can improve performance of many
database workloads without requiring the addition of any new indexes (and
in some cases may even allow indexes to be removed without significantly
impacting performance).
More Changes
With the introduction of default temporary tablespace, the SYSTEM tablespace
is no longer used as the default storage location for temporary data. Oracle9i
also allows for better control over database downtime by enabling administrators
to specify the mean time to recover (MTTR) from system failures in number
of seconds. This feature coupled with more dynamic initialization parameters
help administrators further improve database availability.
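Assuming the new FAST_START_MTTR_TARGET parameter, the desired recovery time can be expressed directly in seconds (the value is illustrative):

```sql
-- Ask the instance to bound crash recovery at roughly five minutes.
ALTER SYSTEM SET fast_start_mttr_target = 300;
```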
Oracle9i also supports databases created with multiple block sizes, and allows administrators to configure corresponding 'sub caches' within the buffer cache for each block size. This capability allows administrators to place objects in tablespaces of appropriate block size in order to improve I/O performance, and also allows tablespaces to be transported between different databases, for example, from an OLTP environment to a Data Warehousing environment.
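A sketch of the corresponding commands (the cache size, path and tablespace name are illustrative):

```sql
-- Reserve a buffer sub-cache for 16K blocks...
ALTER SYSTEM SET db_16k_cache_size = 16M;

-- ...then tablespaces of that block size can be created or plugged in.
CREATE TABLESPACE dw_data
  DATAFILE '/u03/oradata/dw_data01.dbf' SIZE 100M
  BLOCKSIZE 16K;
```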
Oracle9i caches execution plans, allowing administrators to investigate reported performance problems without having to re-execute the offending queries.
To ease backup and recovery operations, Recovery Manager in Oracle9i
enables one time backup configuration, automatic management of backups
and archived logs based on a user specified recovery window, restartable
backups and restores, and test restore/recovery. Recovery Manager implements
a recovery window, a new policy to control when backups expire. This allows
administrators to establish a period of time during which it will be possible
to discover logical errors and fix the affected objects by doing a database
or tablespace point-in-time recovery.
Recovery Manager will also automatically expire backups that are no
longer required to restore the database to a point-in-time within the recovery
window. These features are designed to reduce the time and effort spent
by administrators on routine backup tasks by automating the most commonly
performed ones. The new controlfile autobackup feature also
allows for restoring/recovering a database even when a Recovery Manager
repository is not available. Recovery Manager in Oracle9i also features
enhanced reporting, a more user friendly interface and the ability to specify
a highly flexible backup configuration to suit varied requirements depending
on the nature of database and business needs.
Recovery
+ More fault tolerance
+ Ignore corruption and carry on
+ Back out of botched session
+ Selective skipping of blocks
+ Only those that apply
+ Online table re-org
+ Rollback managed in the tablespace
+ Standby database failover and failback
+ Different block size by tablespace
Bitmap join indexes
Improved performance
More choices during join operation
Materialized views
More sophisticated query rewrite
Fast refresh opportunities
Say goodbye
Server Manager
bstat/estat
Connect internal
Traditional temporary tablespaces
Incremental export
ANALYZE
LONG data type
NCHAR
Rollback segment management
Initialization parameters: ROLLBACK_SEGMENTS, LOG_CHECKPOINT_INTERVAL,
DB_BLOCK_BUFFERS