Memory Tuning
The total available memory on a system should be configured so that all
components of the system function at optimum levels. The following is a
rule-of-thumb breakdown to assist in allocating memory to the various
components of a system with an Oracle back-end.
SYSTEM COMPONENT                      | ALLOCATED % OF MEMORY |
Oracle SGA Components                 | ~50%                  |
Operating System + Related Components | ~15%                  |
User Memory                           | ~35%                  |
The following is a rule-of-thumb breakdown of the ~50% of memory that
is allocated for the Oracle SGA. These are good starting numbers and will
potentially require fine-tuning once the nature and access patterns of the
application are determined.
ORACLE SGA COMPONENT  | ALLOCATED % OF MEMORY |
Database Buffer Cache | ~80%                  |
Shared Pool Area      | ~12%                  |
Fixed Size + Misc     | ~1%                   |
Redo Log Buffer       | ~0.1%                 |
The following example illustrates the above guidelines. It assumes a
system configured with 2 GB of memory and an average of 100 concurrent
sessions at any given time. The application requires response times within
a few seconds and is mainly transactional, but it does support batch reports
at regular intervals.
SYSTEM COMPONENT                      | ALLOCATED MEMORY (IN MB) |
Oracle SGA Components                 | ~1024                    |
Operating System + Related Components | ~306                     |
User Memory                           | ~694                     |
In the breakdown above, approximately 694 MB of memory will be available
for the Program Global Areas (PGAs) of all Oracle server processes. Again
assuming 100 concurrent sessions, the average memory consumption of a given
PGA should not exceed ~7 MB. Note that SORT_AREA_SIZE is part of the PGA.
ORACLE SGA COMPONENT  | ALLOCATED MEMORY (IN MB) |
Database Buffer Cache | ~800                     |
Shared Pool Area      | ~128 - 188               |
Fixed Size + Misc     | ~8                       |
Redo Log Buffer       | ~1 (average size 512K)   |
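As a rough illustration only, the 2 GB example above might translate into init.ora settings along the following lines (a sketch that assumes an 8 KB database block size; the exact values must be validated against the real workload):
# Hypothetical init.ora sketch for the 2 GB example (8 KB block size assumed)
db_block_buffers = 102400       # ~800 MB buffer cache (102400 x 8 KB)
shared_pool_size = 157286400    # ~150 MB shared pool (inside the 128 - 188 MB range)
log_buffer       = 524288       # 512 KB redo log buffer
sort_area_size   = 1048576      # example 1 MB per-process sort area, part of the ~7 MB PGA budget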
Top Oracle Init.ora Parameters
Reference for setting the DB_BLOCK_LRU_LATCHES parameter
Default value: 1/2 the # of CPUs
Max value: min 1, max about 6 * max(#CPUs, #processor groups)
1) Oracle has found that an optimal value is 2 x #CPUs and recommends testing at this level.
2) Setting this parameter to a multiple of #CPUs is also important so that Oracle can properly allocate and utilize working sets.
3) This value is hard-coded in 9i.
**IMPORTANT**
Increasing this parameter beyond 2 x #CPUs may have a negative impact on the system.
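For example, on a hypothetical 4-CPU host the recommendation above would be tested with an entry such as:
# Hypothetical init.ora entry for a 4-CPU host (2 x #CPUs)
db_block_lru_latches = 8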
FREQUENTLY ASKED QUESTIONS
You have just upgraded to 8.0 or 8.1 and have found that there are 2 new parameters regarding DBWR. You are wondering what the differences are and which one you should use.
DBWR_IO_SLAVES
In Oracle7, the multiple DBWR processes were simple slave processes; i.e., unable to perform async I/O calls. In Oracle80, true asynchronous I/O is provided to the slave processes, if available. This feature is implemented via the init.ora parameter dbwr_io_slaves. With dbwr_io_slaves, there is still a master DBWR process and its slave processes. This feature is very similar to the db_writers in Oracle7, except the IO slaves are now capable of asynchronous I/O on systems that provide native async I/O, thus allowing for much better throughput as slaves are not blocked after the I/O call. I/O slaves for DBWR are allocated immediately following database open when the first I/O request is made.
DB_WRITER_PROCESSES
Multiple database writers is implemented via the init.ora parameter db_writer_processes. This feature was enabled in Oracle8.0.4, and allows true database writers; i.e., no master-slave relationship. With Oracle8 db_writer_processes, each writer process is assigned to a LRU latch set. Thus, it is recommended to set db_writer_processes equal to the number of LRU latches (db_block_lru_latches) and not exceed the number of CPUs on the system. For example, if db_writer_processes was set to four and db_lru_latches=4, then each writer process will manage its corresponding set.
Things to know and watch out for....
1. Multiple DBWRs and DBWR IO slaves cannot coexist. If both are enabled, then the following error message is produced: ksdwra("Cannot start multiple dbwrs when using I/O slaves.\n"); Moreover, if both parameters are enabled, dbwr_io_slaves will take precedence.
2. The number of DBWRs cannot exceed the number of db_block_lru_latches. If it does, then the number of DBWRs will be minimized to equal the number of db_block_lru_latches and the following message is produced in the alert.log during startup: ("Cannot start more dbwrs than db_block_lru_latches.\n"); However, the number of lru latches can exceed the number of DBWRs.
3. dbwr_io_slaves are not restricted to the db_block_lru_latches; i.e., dbwr_io_slaves >= db_block_lru_latches.
Should you use DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
Although both implementations of DBWR processes may be beneficial, the
choice of which option to use depends on the following:
1) the amount of write activity;
2) the number of CPUs (the number of CPUs is also indirectly related to the number of LRU latch sets);
3) the size of the buffer cache;
4) the availability of asynchronous I/O (from the OS).
There is NOT a definite answer to this question but here are some considerations to have when making your choice. Please note that it is recommended to try BOTH (not simultaneously) against your system to determine which best fits the environment.
-- If the buffer cache is very large (100,000 buffers and up) and the application is write intensive, then db_writer_processes may be beneficial. Note, the number of writer processes should not exceed the number of CPUs.
-- If the application is not very write intensive (or even a DSS system) and async I/O is available, then consider a single DBWR writer process; If async I/O is not available then use dbwr_io_slaves.
-- If the system is a uniprocessor (1 CPU), then you may want to use dbwr_io_slaves.
Implementing dbwr_io_slaves or db_writer_processes comes with some overhead cost. Multiple writer processes and I/O slaves are advanced features, meant for high I/O throughput. Implement these features only if the database environment requires such I/O throughput. In some cases, it may be acceptable to disable I/O slaves and run with a single DBWR process.
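As a sketch only, the two alternatives discussed above might be configured as follows in the init.ora (assuming a hypothetical 4-CPU host with db_block_lru_latches = 4; never enable both at the same time):
# Option A: multiple true writer processes (one per LRU latch, not more than #CPUs)
db_block_lru_latches = 4
db_writer_processes  = 4
# Option B: a single DBWR with I/O slaves (e.g. where native async I/O is unavailable)
# dbwr_io_slaves = 4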
Other Ways to Tune DBWR Processes
It can be easily seen that reducing buffer operations will be a direct benefit to DBWR and also help overall database performance. Buffer operations can be reduced by:
1) using dedicated temporary tablespaces
2) direct sort reads
3) direct-path SQL*Loader loads
4) performing direct exports.
In addition, keeping a high buffer cache hit ratio will be extremely beneficial not only to the response time of applications, but the DBWR as well.
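As an illustration of the first item, a dedicated temporary tablespace might be created and assigned as follows (a sketch only; the file name and sizes are hypothetical):
-- Create a tablespace reserved for sort segments and make it the user's temporary tablespace
CREATE TABLESPACE temp
DATAFILE '/u02/oradata/PROD/temp01.dbf' SIZE 500M
DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0)
TEMPORARY;
ALTER USER scott TEMPORARY TABLESPACE temp;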
* DATA DICTIONARY cache hit ratio (Goal > 90%; if lower, increase SHARED_POOL_SIZE)
Contains:
Preparsed database procedures
Preparsed database triggers
Recently parsed SQL & PL/SQL requests
This is the memory allocated for the library and data dictionary cache
select sum(gets) Gets, sum(getmisses) Misses,
       (1 - (sum(getmisses) / (sum(gets) + sum(getmisses)))) * 100 HitRatio
from v$rowcache;
* The SHARED_POOL_SIZE HIT RATIO (LIBRARY CACHE hit ratio) should be
greater than 99%.
column namespace heading "Library Object"
column gets format 9,999,999 heading "Gets"
column gethitratio format 999.99 heading "Get Hit%"
column pins format 9,999,999 heading "Pins"
column pinhitratio format 999.99 heading "Pin Hit%"
column reloads format 99,999 heading "Reloads"
column invalidations format 99,999 heading "Invalid"
column db format a10
set pages 58 lines 80
select namespace, gets, gethitratio*100 gethitratio,
pins, pinhitratio*100 pinhitratio, RELOADS, INVALIDATIONS
from v$librarycache
/
If all Get Hit% values (gethitratio in the view) except for indexes are greater than 80-90 percent, this is the desired state; the value for indexes is low because of the few accesses of that type of object. Notice that the Pin Hit% should also be greater than 90% (except for indexes). The other goals of tuning this area are to reduce reloads to as small a value as possible (this is done by proper sizing and pinning) and to reduce invalidations. Invalidations happen when, for one reason or another, an object becomes unusable.
Guideline: In a system where there is no flushing, increase the shared pool size in 20% increments to reduce reloads and invalidations and increase hit ratios.
select sum(pins) Executions, sum(pinhits) Execution_Hits,
       ((sum(pinhits) / sum(pins)) * 100) phitrat,
       sum(reloads) Misses,
       ((sum(pins) / (sum(pins) + sum(reloads))) * 100) RELOAD_hitrat
from v$librarycache;
* How much memory is left for SHARED_POOL_SIZE
col value for 999,999,999,999 heading "Shared Pool Size"
col bytes for 999,999,999,999 heading "Free Bytes"
select to_number(v$parameter.value) value, v$sgastat.bytes,
       (v$sgastat.bytes / v$parameter.value) * 100 "Percent Free"
from v$sgastat, v$parameter
where v$sgastat.name = 'free memory'
and v$parameter.name = 'shared_pool_size';
A better query:
select sum(ksmchsiz) Bytes, ksmchcls Status
from SYS.x$ksmsp
group by ksmchcls;
If there is free memory then there is no need to increase this parameter.
* Identifying objects reloaded into the SHARED POOL again and again
select substr(owner,1,10) owner, substr(name,1,25) name, substr(type,1,15) type,
       loads, sharable_mem
from v$db_object_cache
-- where owner not in ('SYS','SYSTEM') and
where loads > 1
and type in ('PACKAGE','PACKAGE BODY','FUNCTION','PROCEDURE')
order by loads DESC;
* Large Objects NOT 'pinned' in the Shared Pool
To determine which large PL/SQL objects are currently loaded in the shared
pool and are not marked 'kept' (NOT pinned), and therefore may be causing a
problem, execute the following query:
select name, sharable_mem
from v$db_object_cache
where sharable_mem > 10000
and (type = 'PACKAGE' or type = 'PACKAGE BODY' or type = 'FUNCTION'
or type = 'PROCEDURE')
and kept = 'NO';
* The USERS tablespace (or whichever tablespace is assigned to users by default) should be created with small default storage values, e.g. initial 100K next 100K.
* The values recommended by Oracle for Small, Medium, or Large refer to
the number of users accessing the database, not to the size of the database.
Small is 1-11 users, Medium is 12-25 or 30.
INSTANCE TUNING
1) Library Cache Hit Ratio:
In the most basic terms, the library cache is a memory structure that
holds the parsed (ie. already examined to determine syntax correctness,
security privileges, execution plan, etc.) versions of SQL statements that
have been executed at least once. As new SQL statements arrive, older SQL
statements will be pushed from the memory structure to provide space for
the new statements. If the older SQL statements need to be re-executed,
they will now have to be re-parsed. Also, a SQL statement that is not exactly
the same as an already parsed statement (including even capitalization)
will be reparsed even though it may perform the exact same operation. Parsing
is an expensive operation, so the objective is to make the memory structure
large enough to hold enough parsed SQL statements to avoid a large percentage
of re-parsing.
Target: 99% or greater.
Value: SELECT (1 - SUM(reloads)/SUM(pins)) FROM v$librarycache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes)
in the INIT.ORA file.
2) Dictionary Cache Hit Ratio:
The dictionary cache is the memory structure that holds the most recently
used contents of ORACLE's data dictionary, such as security privileges,
table structures, column data types, etc. This data dictionary information
is necessary for each and every parsing of a SQL statement. Recalling that
memory is around 300 times faster than disk, it is needless to say that
performance is improved by holding enough data dictionary information in
memory to significantly minimize disk accesses.
Target: 90%
Value: SELECT (1 - SUM(getmisses)/SUM(gets)) FROM v$rowcache;
Correction: Increase the SHARED_POOL_SIZE parameter (in bytes)
in the INIT.ORA file.
3) Buffer Cache Hit Ratio:
The buffer cache is the memory structure that holds the most recently
used blocks read from disk, whether table, index, or other segment type.
As new data is read into the buffer cache, data that hasn't been recently
used is pushed out. Again recalling that memory is approximately 300 times
faster than disk, the objective is to hold enough data in memory to minimize
disk accesses. Note that data read from tables through the use of indexes
is held in the buffer cache much longer than data read via full-table scans.
Target: 90% (although some shops find 80% or even 70% acceptable)
Value:
SELECT value FROM v$sysstat WHERE name = 'consistent gets';
SELECT value FROM v$sysstat WHERE name = 'db block gets';
SELECT value FROM v$sysstat WHERE name = 'physical reads';
Buffer cache hit ratio = 1 - physical reads/(consistent gets + db block
gets)
Correction: Increase the DB_BLOCK_BUFFERS parameter (in db
blocks) in the INIT.ORA file.
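For convenience, the three statistics and the formula above can be combined into a single query (a sketch):
SELECT 1 - (phy.value / (cg.value + dbg.value)) "Buffer Cache Hit Ratio"
FROM v$sysstat phy, v$sysstat cg, v$sysstat dbg
WHERE phy.name = 'physical reads'
AND cg.name = 'consistent gets'
AND dbg.name = 'db block gets';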
Other notes:
- Compare the values for "table scans" and "table access by rowid" in
the v$sysstat table to gain general insight into whether additional indexing
is needed. Tuning specific applications via indexing will increase the "table
access by rowid" value (ie. tables read through the use of indexes) and
decrease the "table scans" values. This effect tends to improve the buffer
cache hit ratio since a smaller volume of data is read into the buffer cache
from disk, so less previously cached data is pushed out. (See the article
on application tuning for more details regarding indexing.)
- A low buffer cache hit ratio can very quickly lead to an I/O bound
situation, as more reads are required per period of time to provide the
requested data. When the reads/time period exceed the workload supported
by the disk subsystem, exponential performance degradations can occur. (Please
see the section on Operating System tuning.)
- Since the buffer cache will typically be the largest memory structure
allocated in the ORACLE instance, it is the structure most likely to contribute
to O/S paging. If the buffer cache is sized such that the hit ratio is
90%, but excessive paging occurs at this setting, performance may be better
if the buffer cache were sized to achieve an 85% hit ratio. Careful analysis
is necessary to balance the buffer cache hit ratio with the O/S paging
rate.
4) Sort Area Hit Ratio:
Sorts that are too large to be performed in memory are written to disk.
Once again, memory is about 300 times faster than disk, so for instances
where a large volume of sorting occurs (such as decision support systems
or data warehouses), sorting on disk can degrade performance. The objective,
of course, is to allow a significant percentage of sorts to occur in memory.
Target: 90% (although many shops find 80% or less acceptable)
Value:
SELECT value FROM v$sysstat WHERE name = 'sorts (memory)';
SELECT value FROM v$sysstat WHERE name = 'sorts (disk)';
Sort area hit ratio = 1 - disk sorts/(memory sorts + disk sorts);
Correction: Increase the SORT_AREA_SIZE parameter (in bytes)
in the INIT.ORA file.
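The two statistics can likewise be combined into one query (a sketch):
SELECT 1 - (d.value / (m.value + d.value)) "Sort Area Hit Ratio"
FROM v$sysstat m, v$sysstat d
WHERE m.name = 'sorts (memory)'
AND d.name = 'sorts (disk)';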
Other notes:
- With release 7.3 and above, setting the SORT_DIRECT_WRITES = TRUE
initialization parameter causes sorts to disk to bypass the buffer cache,
thus improving the buffer cache hit ratio.
- As with buffer cache hit ratio, examine the values for "table scans"
and "table access by rowid" in the v$sysstat table to determine if additional
indexing is needed. In some cases, the optimizer will choose to retrieve
the rows in the correct order by using the index, thus avoiding a sort.
In other cases, retrieval by index rather than full-table scan tends to
collect a smaller quantity of rows to be sorted, thus increasing the probability
that the sort can occur in memory, which also tends to improve the sort
area hit ratio.
- Also, as with the buffer cache hit ratio, the sort area size (if very large)
can contribute to O/S paging. In general, sorting on disk should be favored
over excessive paging, as paging affects all memory structures (ORACLE and
non-ORACLE) while sorting on disk only affects sorts performed by the ORACLE
instance.
5) Redo Log Space Requests:
Redo logs (and archive logs if the ORACLE instance is run in ARCHIVELOG
mode) are transaction logs involving a variety of structures. The redo
log buffer is a memory structure into which changes are recorded as they
are applied to blocks in the buffer cache (including data, index, rollback
segments, etc.). Committed changes are synchronously flushed to redo log
file members on disk, while uncommitted changes are asynchronously written
to redo log files. (This approach makes perfect sense on inspection. If
an instance crash occurs, committed changes are already written to the redo
logs on disk and are applied during instance recovery. Uncommitted changes
in the redo log buffer not yet written to disk are lost, and any uncommitted
changes that have been written to disk are rolled back during instance recovery.)
A session performing an update and an immediate commit will not return until
the committed change has been written to the redo log buffer and flushed
to the redo log files on disk. Redo log groups are written to in a round-robin
manner. When the mirrored members of a redo log group become full, a log
switch occurs, thus archiving one member of the redo log group (if ARCHIVELOG
mode is TRUE), then clearing the members of that redo log group. Note that
a checkpoint also occurs at least on each redo log switch. In most basic
form, the redo log buffer should be large enough that no waits for available
space in the memory structure occur while changes are written to redo log
files. The redo log file size should be large enough that the redo log buffer
does not fill during a redo log switch. Finally, there should be enough redo
log groups that the archiving and clearing of filled redo logs does not cause
waits for redo log switches, thus causing the redo log buffer to fill. The
inability to write changes to the redo log buffer because it is full is reported
as redo log space requests in the v$sysstat table.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'redo log space
requests';
Correction:
- Increase the LOG_BUFFER parameter (in bytes) in the INIT.ORA file.
- Increase the redo log size.
- Increase the number of redo log groups.
Other notes:
- The default configuration of small redo log size and two redo log
groups is seldom sufficient. Between 4 and 10 groups typically yields adequate
results, depending on the particular archive log destination (whether a single
disk, RAID array, or tape). Size will be very dependent upon the specific
application characteristics and throughput requirements, and can range from
less than 10 Mb to 500 Mb or greater.
- Since redo log sizes and groups can be changed without a shutdown/restart
of the instance, increasing the redo log size and number of groups is
typically the best area to start tuning for reduction of redo log space
requests. If increasing the redo log size and number of groups appears to
have little impact on redo log space requests, then increase the LOG_BUFFER
initialization parameter.
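When reviewing the redo log size and the number of groups, the current configuration can be checked with a query along these lines (a sketch):
-- Current redo log groups, member counts and sizes
SELECT group#, members, bytes/1024/1024 size_mb, status
FROM v$log
ORDER BY group#;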
6) Redo Buffer Latch Miss Ratio:
One of the two types of memory structure locking mechanisms used by
an ORACLE instance is the latch. A latch is a locking mechanism that is
implemented entirely within the executable code of the instance (as opposed
to an enqueue, see below). Latch mechanisms most likely to suffer from
contention involve requests to write data into the redo log buffer. To
serve the intended purpose, writes to the redo log buffer must be serialized
(ie. one process locks the buffer, writes to it, then unlocks it, a second
process locks, writes, and unlocks, etc., while other processes wait for
their chance to acquire these same locks). There are four different groupings
applicable to redo buffer latches: redo allocation latches and redo copy
latches, each with immediate and willing-to-wait priorities. Redo allocation
latches are acquired by small redo entries (having an entry size smaller
than or equal to the LOG_SMALL_ENTRY_MAX_SIZE initialization parameter)
and utilize only a single CPU's resources for execution. Redo copy latches
are requested by larger redo entries (entry size larger than the LOG_SMALL_ENTRY_MAX_SIZE),
and take advantage of multiple CPU's for execution. Recall from above that
committed changes are synchronously written to redo logs on disk: these entries
require an immediate latch of the appropriate type. Uncommitted changes are
asynchronously written to redo log files, thus they attempt to acquire a willing-to-wait
latch of the appropriate type. Below, each category of redo buffer latch
will be considered separately.
- Redo allocation immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo allocation' AND b.latch# = a.latch#;
Value (willing-to-wait):
SELECT a.misses/(a.gets + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo allocation' AND b.latch# = a.latch#;
Correction: Decrease the LOG_SMALL_ENTRY_MAX_SIZE parameter
in the INIT.ORA file.
Other notes:
- By making the max size for a redo allocation latch smaller, more redo
log buffer writes qualify for a redo copy latch instead, thus better utilizing
multiple CPU's for the redo log buffer writes. Even though memory structure
manipulation times are measured in nanoseconds, a larger write still takes
longer than a smaller write. If the size for remaining writes done via redo
allocation latches is small enough, they can be completed with little or
no redo allocation latch contention.
- On a single CPU node, all log buffer writes are done via redo allocation
latches. If log buffer latches are a significant bottleneck, performance
can benefit from additional CPU's (thus enabling redo copy latches) even
if the CPU utilization is not an O/S level bottleneck.
- In the SELECT statements above, an extremely small value is added
to the divisor to eliminate potential divide-by-zero errors.
- Redo copy immediate and willing-to-wait latches:
Target: 1% or less
Value (immediate):
SELECT a.immediate_misses/(a.immediate_gets + a.immediate_misses + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo copy' AND b.latch# = a.latch#;
Value (willing-to-wait):
SELECT a.misses/(a.gets + 0.000001)
FROM v$latch a, v$latchname b
WHERE b.name = 'redo copy' AND b.latch# = a.latch#;
Correction: Increase the LOG_SIMULTANEOUS_COPIES parameter
in the INIT.ORA file.
Other Notes:
- Essentially, this initialization parameter is the number of redo copy
latches available. It defaults to the number of CPU's (assuming a multiple
CPU node). Oracle Corporation recommends setting it as large as 2 times
the number of CPU's on the particular node, although quite a bit of experimentation
may be required to get the value adjusted in a suitable manner for any
particular instance's workload. Depending on CPU capability and utilization,
it may be beneficial to set this initialization parameter smaller or larger
than 2 X #CPU's.
- Recall that the assignment of log buffer writes to either redo allocation
latches or redo copy latches is controlled by the maximum log buffer write
size allowed for a redo allocation latch, and is specified in the LOG_SMALL_ENTRY_MAX_SIZE
initialization parameter. Recall also that redo copy latches apply only
to multiple CPU hosts.
7) Enqueue Waits:
The second of the two types of memory structure locking mechanisms used
by an ORACLE instance is the enqueue. As opposed to a latch, an enqueue
is a lock implemented through the use of an operating system call, rather
than entirely within the instance's executable code. Exactly what operations
use locks via enqueues is not made sufficiently clear by any Oracle documentation
(or at least none that the author has seen), but the fact that enqueue
waits do degrade instance performance is reasonably clear. Luckily, tuning
enqueues is very straightforward.
Target: 0
Value: SELECT value FROM v$sysstat WHERE name = 'enqueue waits';
Correction: Increase the ENQUEUE_RESOURCES parameter in the
INIT.ORA file.
8) Checkpoint Contention:
A checkpoint is the process of flushing all changed data blocks (table,
index, rollback segments, etc.) held in the buffer cache to their corresponding
datafiles on disk. This process occurs during each redo log switch, each
time the number of database blocks specified in the LOG_CHECKPOINT_INTERVAL
initialization parameter is reached, and each time the number of seconds
specified in the LOG_CHECKPOINT_TIMEOUT is reached. (Also, checkpoints occur
during a NORMAL or IMMEDIATE SHUTDOWN, when a tablespace is placed in BACKUP
mode, or when an ALTER SYSTEM CHECKPOINT is manually issued, but these occurrences
are usually outside the scope of normal daytime operation.) Depending on
the number of changed blocks in the buffer cache, a checkpoint can take considerable
time to complete. Since this process is essentially done asynchronously,
user sessions performing work will typically not have to wait for a checkpoint
to complete. However, checkpoints can affect overall system performance since
they are fairly resource-intensive operations, even though they occur in
the background. Checkpoints are, of course, absolutely necessary, but it
is quite possible for one checkpoint to begin (because of LOG_CHECKPOINT_INTERVAL
or LOG_CHECKPOINT_TIMEOUT settings) and partially complete, then be rolled-back
because another checkpoint was issued (perhaps because of a redo log switch).
It is desirable to avoid this checkpoint contention because it wastes considerable
resources that can be used by other processes. Checkpointing statistics
are readily available in the v$sysstat table, and the contention is fairly
simple to determine.
Target: 1 or less
Value:
SELECT value FROM v$sysstat WHERE name = 'background checkpoints started';
SELECT value FROM v$sysstat WHERE name = 'background checkpoints completed';
Checkpoints rolled-back = checkpoints started - checkpoints completed;
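A single-query version of the calculation above (a sketch):
SELECT s.value - c.value "Checkpoints Not Completed"
FROM v$sysstat s, v$sysstat c
WHERE s.name = 'background checkpoints started'
AND c.name = 'background checkpoints completed';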
Correction:
- Increase the LOG_CHECKPOINT_TIMEOUT parameter (in seconds) in the
INIT.ORA file, or set it to 0 to disable time-based checkpointing. If time-based
checkpointing is not disabled, set it to checkpoint once per hour or more.
- Increase the LOG_CHECKPOINT_INTERVAL parameter (in db blocks) in the
INIT.ORA file, or set it to an arbitrarily large value so that change-based
checkpoints will only occur during a redo log switch.
- Examine the redo log size and the resulting frequency of redo log
switches.
Other notes: Note that regardless of the checkpoint frequency,
no data is lost in the event of an instance crash. All changes are recorded
to the redo logs and would be applied during instance recovery on the next
startup, so checkpoint frequency will impact the time required for instance
recovery. Presented below is a typical scenario:
- Set the LOG_CHECKPOINT_INTERVAL to an arbitrarily large value, set
the LOG_CHECKPOINT_TIMEOUT to 2 hours, and size the redo logs so that a
log switch will normally occur once per hour. During times of heavy OLTP
activity, a change-based log switch will occur approximately once per hour,
and no time-based checkpoints will occur. During periods of light OLTP activity,
a time-based checkpoint will occur at least once every two hours, regardless
of the number of changes. Setting the LOG_CHECKPOINT_INTERVAL arbitrarily
large allows change-based checkpoint frequency to be adjusted during periods
of heavy use by re-sizing the redo logs on-line rather than adjusting the
initialization parameter and performing an instance shutdown/restart.
9) Rollback Segment Contention:
Rollback segments are the structures into which undo information for
uncommitted changes is temporarily stored. This behavior serves two purposes.
First, a session can remove a change that was just issued by simply issuing
a ROLLBACK rather than a COMMIT. Second, read consistency is established
because a long-running SELECT statement against a table that is constantly
being updated (for example) will get data that is consistent with the start
time of the SELECT statement by reading undo information from the appropriate
rollback segment. (Otherwise, the answer returned by the long-running SELECT
would vary depending on whether that particular block was read before the
update occurred, or after.) Rollback segments become a bottleneck when there
are not enough to handle the load of concurrent activity, in which case,
sessions will wait for write access to an available rollback segment. Some
waits for rollback segment data blocks or header blocks (usually header blocks)
will always occur, so the criterion for tuning is to limit the waits to a very
small percentage of the total number of data blocks requested. Note
that rollback segments function exactly like table segments or index segments:
they are cached in the buffer cache, and periodically checkpointed to disk.
Target: 1% or less
Value:
Rollback waits = SELECT max(count) FROM v$waitstat
WHERE class IN ('system undo header', 'system undo block','undo header',
'undo block')
GROUP BY class;
Block gets = SELECT sum(value) FROM v$sysstat WHERE name IN ('consistent
gets','db block gets');
Rollback segment contention ratio = rollback waits / block gets
Correction: Create additional rollback segments.
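The ratio above can be computed in one statement along these lines (a sketch):
SELECT w.waits / g.gets "Rollback Contention Ratio"
FROM (SELECT max(count) waits FROM v$waitstat
      WHERE class IN ('system undo header','system undo block','undo header','undo block')) w,
     (SELECT sum(value) gets FROM v$sysstat
      WHERE name IN ('consistent gets','db block gets')) g;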
* Check DB Parameters
select substr(name,1,20), substr(value,1,40), isdefault, isses_modifiable,
issys_modifiable
from v$parameter
where issys_modifiable <> 'FALSE'
or isses_modifiable <> 'FALSE'
order by name;
* SQL statements must be written identically (including case and spaces); that way they will be reused if they are found in memory (SHARED_SQL_AREA). In addition, the objects they reference must be the same.
* Size of Database
compute sum of bytes on report
break on report
Select tablespace_name, sum(bytes) bytes
From dba_data_files
Group by tablespace_name;
* How much Space is Left?
compute sum of bytes on report
Select tablespace_name, sum(bytes) bytes
From dba_free_space
Group by tablespace_name;
* MEMORY VALUES. To view them, use:
select substr(name,1,35) name, substr(value,1,25) value
from v$parameter
where name in ('db_block_buffers','db_block_size',
'shared_pool_size','sort_area_size');
* Identify the SQL responsible for the most BUFFER HITS and/or
DISK READS. To see what is in the SQL AREA, run:
SELECT SUBSTR(sql_text,1,80) Text, disk_reads, buffer_gets, executions
FROM v$sqlarea
WHERE executions > 0
AND buffer_gets > 100000
and DISK_READS > 100000
ORDER BY (DISK_READS * 100) + BUFFER_GETS desc
/
The column BUFFER_GETS is the total number of times the SQL statement
read a database block from the buffer cache in the SGA. Since almost every
SQL operation passes through the buffer cache, this value represents the
best metric for determining how much work is being performed. It is not perfect,
as there are many direct-read operations in Oracle that completely bypass
the buffer cache. So, supplementing this information, the column DISK_READS
is the total number of times the SQL statement read database blocks from disk,
either to satisfy a logical read or to satisfy a direct read. Thus, the
formula:
(DISK_READS * 100) + BUFFER_GETS
is a very adequate metric of the amount of work being performed by a
SQL statement. The weighting factor of 100 is completely arbitrary, but
it reflects the fact that DISK_READS are inherently more expensive than
BUFFER_GETS to shared memory.
Patterns to look for
DISK_READS close to or equal to BUFFER_GETS: this indicates that most
(if not all) of the gets or logical reads of database blocks are becoming
physical reads against the disk drives. This generally indicates a full-table
scan, which is usually not desirable but which is usually quite easy
to fix.
* Finding the top 25 SQL
declare
  top25 number;
  text1 varchar2(4000);
  x     number;
  len1  number;
  cursor c1 is
    select buffer_gets, substr(sql_text,1,4000)
    from v$sqlarea
    order by buffer_gets desc;
begin
  dbms_output.put_line('Gets'||'      '||'Text');
  dbms_output.put_line('----------'||' '||'----------------------');
  open c1;
  for i in 1..25 loop
    fetch c1 into top25, text1;
    dbms_output.put_line(rpad(to_char(top25),9)||' '||substr(text1,1,66));
    len1 := length(text1);
    x := 66;
    -- print the remainder of long statements on continuation lines
    while len1 > x-1 loop
      dbms_output.put_line('          '||substr(text1,x,66));
      x := x + 66;
    end loop;
  end loop;
  close c1;
end;
/
* Displays the percentage of SQL executed that did NOT incur an expensive
hard parse. A low number may indicate a literal SQL or other sharing problem.
The target ratio depends on your development environment; OLTP should be
around 90 percent.
select 100 * (1-a.hard_parses/b.executions) noparse_hitratio
from (select value hard_parses
from v$sysstat
where name = 'parse count (hard)'
) a
,(select value executions
from v$sysstat
where name = 'execute count') b;
* HIT RATIO BY SESSION:
column HitRatio format 999.99
select substr(Username,1,15) username,
Consistent_Gets,
Block_Gets,
Physical_Reads,
100*(Consistent_Gets+Block_Gets-Physical_Reads)/
(Consistent_Gets+Block_Gets) HitRatio
from V$SESSION, V$SESS_IO
where V$SESSION.SID = V$SESS_IO.SID
and (Consistent_Gets+Block_Gets)>0
and Username is not null;
* IO PER DATAFILE:
select substr(DF.Name,1,40) File_Name,
FS.Phyblkrd Blocks_Read,
FS.Phyblkwrt Blocks_Written,
FS.Phyblkrd+FS.Phyblkwrt Total_IOs
from V$FILESTAT FS, V$DATAFILE DF
where DF.File#=FS.File#
order by FS.Phyblkrd+FS.Phyblkwrt desc;
* Report by user
select substr(username,1,10) "Username", created "Created",
substr(granted_role,1,25) "Roles",
substr(default_tablespace,1,15) "Default TS",
substr(temporary_tablespace,1,15) "Temporary TS"
from sys.dba_users, sys.dba_role_privs
where username = grantee (+)
order by username;
* To see the FREE SPACE left in each TABLESPACE, run:
select substr(a.tablespace_name,1,10) tablespace,
round(sum(a.total1)/1024/1024, 1) Total,
round(sum(a.total1)/1024/1024, 1)-
round(sum(a.sum1)/1024/1024, 1) used,
round(sum(a.sum1)/1024/1024, 1) Free,
round(sum(a.sum1)/1024/1024,1)*100/round(sum(a.total1)/1024/1024,1) porciento_fr,
round(sum(a.maxb)/1024/1024, 1) Largest,
max(a.cnt) Fragment
from (select tablespace_name, 0 total1, sum(bytes) sum1,
max(bytes) MAXB,count(bytes) cnt
from dba_free_space
group by tablespace_name
union
select tablespace_name, sum(bytes) total1, 0, 0, 0
from dba_data_files
group by tablespace_name) a
group by a.tablespace_name
/
* Segments whose next extent can't fit
select substr(owner,1,10) owner, substr(segment_name,1,40) segment_name,
substr(segment_type,1,10) segment_type, next_extent
from dba_segments
where next_extent>
(select sum(bytes) from dba_free_space
where tablespace_name = dba_segments.tablespace_name);
* Find Tables/Indexes fragmented into > 15 pieces
Select substr(owner,1,8) owner, substr(segment_name,1,42) segment_name,
segment_type, extents
From dba_segments
Where extents > 15;
* COALESCING FREE SPACE = adjacent free chunks can be merged into one
larger chunk. Inspect with:
select file_id, block_id, blocks, bytes from dba_free_space
where tablespace_name = 'xxx' order by 1,2;
This returns a list of rows. If the file_id of two rows is the same and
block_id + blocks of the first row equals the block_id of the next row,
then they can be merged.
This is done with ALTER TABLESPACE XX COALESCE;
* Mini script to coalesce all tablespaces
set echo off pages 0 trimsp off feed off
spool coalesce.sql
select 'alter tablespace '||tablespace_name||' coalesce;'
from sys.dba_tablespaces
where tablespace_name not in ('TEMP','ROLLBACK');
spool off
@coalesce.sql
host rm coalesce.sql
* Information about a Table
Select Table_Name, Initial_Extent, Next_Extent,
Pct_Free, Pct_Increase
From dba_tables
Where Table_Name = upper('&Table_name');
* Information about an Index:
Select Index_name, Initial_Extent, Next_Extent
From Dba_indexes
Where Index_Name = upper('&Index_name');
* To find out how empty the INDEXES are, analyze them and then run
the following script. If the ratio is > 20%, rebuild the index.
select lf_rows, lf_rows_len, del_lf_rows, del_lf_rows_len,
(del_lf_rows * 100) / lf_rows "Ratio"
from index_stats where name = upper('&Index_name');
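To rebuild an index flagged by the query above, a statement like the following can be used (a sketch; substitute the real owner and index name):
ALTER INDEX owner.index_name REBUILD;
-- optionally rebuild it into a different tablespace:
-- ALTER INDEX owner.index_name REBUILD TABLESPACE new_ts;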
* Fixing Table Fragmentation
Example: CUSTOMER Table is fragmented
Currently in 22 Extents of 1M each.
(Can be found by querying DBA_EXTENTS)
CREATE TABLE CUSTOMER1
TABLESPACE NEW
STORAGE (INITIAL 23M NEXT 2M PCTINCREASE 0)
AS SELECT * FROM CUSTOMER;
DROP TABLE CUSTOMER;
RENAME CUSTOMER1 TO CUSTOMER;
(Create all necessary privileges,grants, etc.)
* PIN and UNPIN objects:
execute dbms_shared_pool.keep('object_name','P');
Use 'P', 'R', or 'Q' depending on whether the object is a procedure/package,
a trigger, or a sequence, respectively. Beforehand, the dbmspool.sql and
prvtpool.plb scripts must have been run as sys or internal, and execute on
dbms_shared_pool must have been granted. To remove an object, use the same
call with unkeep:
exec dbms_shared_pool.unkeep('SCOTT.TEMP','P');
If you want a table to stay in memory, add the keyword CACHE at the end
of its CREATE TABLE statement. Another way to force it to stay in memory is
the hint /*+ cache(table) */.
To have the pinning happen automatically:
1- Create the following Trigger
CREATE OR REPLACE TRIGGER db_startup_keep
AFTER STARTUP ON DATABASE
BEGIN
sys.dbms_shared_pool.keep('SYS.STANDARD');
sys.dbms_shared_pool.keep('SYS.DBMS_STANDARD');
sys.dbms_shared_pool.keep('SYS.DBMS_UTILITY');
sys.dbms_shared_pool.keep('SYS.DBMS_DESCRIBE');
sys.dbms_shared_pool.keep('SYS.DBMS_OUTPUT');
END;
/
2- The following Oracle core packages owned by user SYS should be pinned
in the shared PL/SQL area:
DUTIL
STANDARD
DIANA
DBMS_SYS_SQL
DBMS_SQL
DBMS_UTILITY
DBMS_DESCRIBE
DBMS_JOB
DBMS_STANDARD
DBMS_OUTPUT
PIDL
3- Run the following Script to check pinned/unpinned packages
SELECT substr(owner,1,10)||'.'||substr(name,1,35) "Object Name",
' Type: '||substr(type,1,12)||
' size: '||sharable_mem ||
' execs: '||executions||
' loads: '||loads||
' Kept: '||kept
FROM v$db_object_cache
WHERE type in ('TRIGGER','PROCEDURE','PACKAGE BODY','PACKAGE')
-- AND executions > 0
ORDER BY executions desc, loads desc, sharable_mem desc;
* ROW_CHAINING = occurs when a row being updated needs more space than
is available in its block, so Oracle chains the row across blocks. This is
very costly for tuning (especially in a VLDB). To see the chained rows, run:
analyze table xxx list chained rows;
This records the table_name and rowid of the chained rows; if there are
many of them, the problem should be fixed. The select is:
select head_rowid from chained_rows where table_name = upper('&Table_name');
To fix it:
-- Copy the chained rows into a temporary table
create table add_chained as
select * from xxx
where rowid in (select head_rowid from chained_rows where table_name
= upper('&Table_name'));
delete xxx where rowid in (select head_rowid from chained_rows where
table_name = upper('&Table_name'));
-- Bring the rows back
insert into xxx select * from add_chained;
drop table add_chained;
* To find out chained rows
ANALYZE TABLE TEST ESTIMATE STATISTICS;
Then from DBA_TABLES,
SELECT (CHAIN_CNT / NUM_ROWS) * 100 FROM DBA_TABLES WHERE TABLE_NAME
= upper('&Table_name');
3 DISKS
Disk 1: SYSTEM tablespace, control file, redo log
Disk 2: INDEX tablespace, control file, redo log, ROLLBACK tablespace
Disk 3: DATA tablespace, control file, redo log
or
Disk 1: SYSTEM tablespace, control file, redo log
Disk 2: INDEX tablespace, control file, redo log
Disk 3: DATA tablespace, control file, redo log, ROLLBACK tablespace
4 DISKS
1- exec, redo logs, export files, control files
2- data, temp, control files
3- indexes, control files
4- archive logs, rollback segs, control files
5 DISKS
1- exec, redo logs, system tablespace, control files
2- data, temp, control files
3- indexes, control files
4- rollback segments, export, control files
5- archive, control files
PCTFREE, PCTUSED, INITRANS and MAXTRANS
.PCTFREE:
This parameter modifies Oracle's behavior when inserting and updating
rows within a data block. It indicates the minimum percentage of the block
that must be kept free for updates to the rows that already exist within
the block. Keep in mind that the space in a block is not made up only of
data; there is also an overhead, so if we assign a table segment a pctfree
of 20, we are not leaving 80% for inserts but 80% minus whatever the block
overhead occupies.
The concept of pctfree is best understood with an example. If we assign a table a pctfree of 20, we are telling Oracle that rows can be inserted into a block until only 20 percent of that block remains free. From that moment on, all new rows are created in other blocks, since there is no more room in this one. So what happens with that 20%? Quite simply, it is used for updates of the rows that are already in the block. When a row is updated and the content of a column grows (for example, the column "nombre" held the value 'Jesus' and now holds 'Jesus Maria'), Oracle draws on the free space within the block to carry out the operation.
This space could therefore even fill up which, if updates of this kind keep coming, could produce migrated rows, as explained later. However, the free space in a block can also grow, either by deleting rows or by updates that shrink the values of the columns of the rows in that block. When enough free space has been made available in the block, inserts into it are allowed again. The notion of "enough free space" to allow inserts again is what the PCTUSED parameter defines.
o If no value is specified for this parameter, the default the object will take is 10%.
o A low PCTFREE reserves less space for rows to expand as a result of
updates; inserts fill the data blocks more completely, and space can be
saved since more rows can be stored per block.
o A high PCTFREE reserves more space for updates that increase the space
required by a row; more space is needed to store a given number of rows,
and the performance of update operations improves since the probability
that a row becomes chained decreases.
.PCTUSED:
The concept of pctused is directly related to pctfree. Suppose a segment
(table or index) is created with a pctfree of 20, so all of its blocks will
have that pctfree. As already explained, this means that rows can be inserted
into one of its blocks until it is 80 percent full. From that moment on, no
new rows can be inserted until space is freed in the block, that is, until
the free space grows again. At this point, keep the following in mind:
o If no value is specified for this parameter, the default the table will take is 40%.
o A low PCTUSED reduces the cost of processing the update and delete
operations that move data blocks onto the free list, since the probability
that a block is considered free is directly reduced by the value of PCTUSED,
and it increases the unused space in the database.
o A high PCTUSED increases the processing cost of insert and update
operations, since it reduces the number of blocks of a data segment that
are available for these operations; it improves space utilization, and the
performance of update operations improves since the probability that a row
becomes chained decreases.
It is essential to know how a table or cluster will be used in order to
assign values for PCTFREE and PCTUSED. If PCTFREE and PCTUSED are assigned
explicitly (ORACLE's defaults are not used), keep in mind that:
o The sum of PCTFREE and PCTUSED must be less than or equal to 100.
o If the sum of PCTFREE and PCTUSED is 100, the processing cost of every
operation is much higher, since there will never be scenarios in which the
space reserved by PCTFREE is larger than the percentage that parameter
indicates.
o The smaller the difference between 100 and the sum of PCTFREE and
PCTUSED, the better the utilization of the space assigned to the table, at
the expense of degraded performance of the operations against it.
o Some of the recommendations suggested for tables with particular
behavior are the following:
- If a table is subject to a very high rate of updates that increase
the size of the updated columns, a value of 20% for PCTFREE and 40% for
PCTUSED keeps a reasonably adequate amount of space to hold the column
changes, leaves (with moderate probability) additional space beyond that
assigned by PCTFREE, and guarantees good performance of update operations.
In this case it is assumed that the relative frequency of deletes versus
inserts and updates is not unbalanced.
- If a table is very volatile (many inserts and deletes occur) and the updates affect either columns that do not change or barely change in size (for example DATE, NUMBER, or CHAR columns), or variable-length columns without greatly affecting the initial row size, a value of 5% for PCTFREE and 60% for PCTUSED is adequate. Very little space is reserved for updates, and a reasonable operating threshold is kept for considering data blocks free to hold new rows.
- For very large tables with a high rate of queries, a low rate of deletes, inserts, or updates, and constraints on physical space, a value of 5% for PCTFREE and 40-50% for PCTUSED is recommended. Little space is reserved for updates, and a normal value guarantees adequate use of the space.
- For master tables, which generally hold a small number of rows and see little (or no) insert, delete, or update activity, a value of 5% for PCTFREE and 20% for PCTUSED is recommended. Very little space is reserved for updates (they are practically nil) and performance is favored at the cost of worse space usage, which should not be a critical factor given the low cardinality of the table.
- For indexes, only the guidelines discussed for PCTFREE apply, since
the PCTUSED parameter is not allowed for these structures. Here, update and
insert operations must be taken into account, remembering that deleting keys
from an index is handled differently: an index block does not become
available again until all of its index entries have been removed.
To check the value of both the pctfree and the pctused parameter for each
table or index segment, query the dba_tables and dba_indexes views owned
by SYS:
Select owner, table_name, pct_free, pct_used from dba_tables;
Select owner, index_name, pct_free from dba_indexes;
To change the value of these parameters for a table or an index, the
following statements can be used:
Alter table table_name pctfree new_pct_free;
Alter table table_name pctused new_pct_used;
Alter index index_name pctfree new_pct_free;
.INITRANS and MAXTRANS:
The value assigned to these parameters depends on the number of concurrent
users that will simultaneously access entries located in the same data block.
Remember that when a process is modifying a row of a table, it takes a
row-level lock and also occupies a slot in the header of the Oracle block
where that row lives. If, before that lock is released (commit or rollback),
a new process tries to modify another row in the same block, it needs to
occupy another slot in that block's header. If no free slots remain in the
block, and there is no free space left to allocate new slots, the second
process must wait until the first one releases the slot it is occupying.
The number of these slots initially reserved when the table is created is
given by the value of INITRANS. This parameter controls block-level
concurrency for processes that update rows. It should be as high as the
number of concurrent transactions expected to be modifying the same block.
Its default value is 1. Some of the criteria that can be followed to set
these parameters are the following:
o If the space occupied by an object is very large and the number of users that access it is small, then concurrent access to the same block appears to be infrequent, so INITRANS should be small in order to minimize the administrative space in the block header.
o Conversely, if an object's data blocks are highly likely to be accessed
by multiple users, it is reasonable to set INITRANS to an estimate of the
number of concurrent accesses expected initially, so as to minimize the
wait time required to obtain new space for storing concurrent-access
information. Additionally, in this situation a larger value should be
specified for MAXTRANS so that, under extreme use of the object, a user
does not have to wait because the maximum permitted number of concurrent
accesses has been exceeded.
.FREELISTS
A process that is inserting rows into a table needs to obtain a list of
blocks where it can store the rows. Whether or not a block is on this list
depends on how full it is and on the values of PCTFREE and PCTUSED. When a
process is assigned a list of free blocks (a free list), no other process
that is inserting can insert into the blocks contained in that list. To
avoid contention, a sensible value for FREELISTS is the maximum number of
concurrent processes expected to be performing inserts into the table.
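Putting the parameters discussed in this section together, a table creation might specify them like this (a sketch; the table and the values are illustrative only):
CREATE TABLE orders (
  order_id NUMBER,
  customer VARCHAR2(40),
  notes    VARCHAR2(256)
)
PCTFREE 20
PCTUSED 40
INITRANS 4
MAXTRANS 16
STORAGE (INITIAL 10M NEXT 10M PCTINCREASE 0 FREELISTS 4)
TABLESPACE users;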
Performance can be improved by properly using the PCTUSED and PCTFREE
storage parameters; these parameters are space related and control how
data blocks are filled.
When a segment is created, it will acquire at least one or more extents
based on the value you put in for minextents. Let's say you've specified
minextents equal to one. The initial extent will be used to store data until
it no longer has any free space available. Will the extent be completely
filled up? Well, that depends on the size established in the pctfree
clause.
This clause tells Oracle the percentage of each block in each extent
to reserve for updates of existing rows. In other words, if you are inserting
data into a row of a table but you don't know all the values for the row,
the pctfree clause will tell Oracle how much space to save so you can come
back later and expand the row by an update without having to move it to
a new block.
There is also a pctused clause, which informs Oracle that no new
inserts can be placed into the block unless the block is less than the value
of pctused full. For example, suppose pctfree is 10% and
pctused is 40% (the defaults). Inserts will be placed into the
block until the block becomes 90 percent full, which is 100% minus the pctfree
value. Since you now have only the minimum amount of space free (as determined
by the pctfree value), a new insert cannot be placed into the block, so a
new block will be allocated and the row will be placed there. Updates to existing
rows within the block will use the 10% of free space. If, or when, space
in the block is freed via data being removed, inserts will not take place
into the block until the block becomes less than 40% full - the pctused
value. Once the block becomes less than 40% full, inserts will be written
into the block until it becomes 90% full again.
When additional data is added to the segment, the
segment will extend by obtaining a second extent of the size specified by
the next parameter. There is no guarantee that the second extent
will be stored physically next to (or contiguous to) the first extent.
Theoretically, PCTUSED can be considered the low-water mark and PCTFREE
the high-water mark. If the free space left in a data block falls below
PCTFREE, no new rows can be added to that block until the used space in the
block drops below PCTUSED. The total of PCTUSED and PCTFREE cannot be over
100. (Default values for PCTFREE and PCTUSED are 10 and 40.)
PCTFREE depends on the kind of updates you are going to do, meaning
how much the records are going to grow after initial creation. For example,
if you have a table with 2 varchar2(256) fields that start out empty and
are filled in later, you may want pctfree = 50%; but if you expect your
records to grow in size by only 20%, then use that for your pctfree.
As for pctused, a common formula is to have pctfree + pctused = 85. So
take 85, subtract your pctfree value, and you get your pctused value.
Example:
Consider having following values for a DDL statement storage parameters:
PCTFREE = 30
PCTUSED = 50
Then, new rows can be added to the data block until the data block becomes 70% FULL (100 PERCENT - PCTFREE). When this occurs, no new rows can be added to this data block. The space is reserved for growth of existing rows.
PCTFREE
· A high value for PCTFREE may improve performance because blocks
have to be reorganized less frequently and chaining is also reduced. There
is more space
for growth of existing rows.
· A low value for PCTFREE may reduce performance since reorganization
happens more often and chaining is increased.
PCTUSED
· A high value for PCTUSED may decrease performance because more
migrated and chained rows are present. But this reduces space wastage by
filling the data
block more completely.
· A low value for PCTUSED may increase performance because of
less migrated and chained rows. But the space usage is not efficient due
to unused space in data blocks.
Tips for PCTUSED and PCTFREE from Metalink Notes
· If the application frequently performs UPDATES that alter sizes
of rows greatly, then PCTFREE can be set high and PCTUSED can be set low.
This would
allow for large amount of space in data blocks for row size growth.
· If there is more INSERT activity with less UPDATES, the PCTFREE
can be set low with average value for PCTUSED to avoid chaining of rows.
· If the main concern is performance and more space is
available, then PCTFREE can be set very high and PCTUSED very low.
· If the main concern is space and not performance, then
PCTFREE can be set very low and PCTUSED very high.
ANALYZE
* NEVER run statistics on the SYS or SYSTEM user.
* For analyzing, a 20% sample of each table, once a week, is recommended.
This is a useful script:
REM Analyze all tables and indexes
set pagesize 0 feedback off trimspool on linesize 999 echo off
spool anal_DB.sql
prompt spool anal_DB.txt
select 'analyze index ' || owner || '.' || index_name || ' compute statistics;'
from sys.dba_indexes
where owner not in ('SYS','SYSTEM');
select 'analyze table ' || owner || '.' || table_name ||
' estimate statistics sample 20 percent;'
from sys.dba_tables
where owner not in ('SYS','SYSTEM');
spool off
@anal_DB.sql
Options:
- Compute statistics using all rows
DBMS_UTILITY.ANALYZE_SCHEMA('userid','COMPUTE');
- Estimate based on 20% of all of a user's tables
DBMS_UTILITY.ANALYZE_SCHEMA('userid','ESTIMATE',NULL,20);
- Estimate 20% of a single table
DBMS_DDL.ANALYZE_OBJECT('TABLE','schema','t_name','ESTIMATE',NULL,20);
or
ANALYZE TABLE table ESTIMATE STATISTICS sample 20 percent;
- Compute statistics for a single index
DBMS_DDL.ANALYZE_OBJECT('INDEX','schema','i_name','COMPUTE');
- Estimate based on a fixed number of rows of all of a user's tables
DBMS_UTILITY.ANALYZE_SCHEMA('userid','ESTIMATE',100000);
or
ANALYZE TABLE table ESTIMATE STATISTICS sample 5000 rows;
- Delete all statistics
DBMS_UTILITY.ANALYZE_SCHEMA('userid','DELETE');
or
ANALYZE TABLE table DELETE STATISTICS;
DBMS_STATS.GATHER_SCHEMA_STATS.
The PL/SQL package DBMS_STATS lets you generate and manage statistics
for cost-based optimization. You can use this package to gather, modify,
view, export, import, and delete statistics.
The DBMS_STATS package can gather statistics on indexes, tables, columns,
and partitions, as well as on all objects in a schema or database.
The statistics-gathering operations can run either serially or in
parallel (DATABASE/SCHEMA/TABLE only).
Procedure Name | Description |
GATHER_TABLE_STATS | Collects table, column, and index statistics. |
GATHER_INDEX_STATS | Collects index statistics. |
GATHER_SCHEMA_STATS | Collects statistics for all objects in a schema. |
GATHER_DATABASE_STATS | Collects statistics for all objects in a database. |
GATHER_SYSTEM_STATS | Collects CPU and I/O statistics for the system. |
Prior to 8i you would use the ANALYZE ... commands for this. From 8i onwards, ANALYZE is no longer recommended for gathering optimizer statistics because of various restrictions; for example, it cannot gather statistics in parallel and cannot store statistics in user-defined statistics tables. Use DBMS_STATS instead.
Examples:
execute dbms_stats.gather_table_stats (ownname => 'SCOTT', -
  tabname => 'DEPT', -
  partname => null, -
  estimate_percent => 20, -
  degree => 5, -
  cascade => true);
execute dbms_stats.gather_schema_stats (ownname => 'SCOTT', -
  estimate_percent => 20, -
  degree => 5, -
  cascade => true);
execute dbms_stats.gather_database_stats (estimate_percent => 20, -
  degree => 5, -
  cascade => true);
SQL Source - Dynamic Method
DECLARE
sql_stmt VARCHAR2(1024);
BEGIN
FOR tab_rec IN (SELECT owner,table_name
FROM all_tables WHERE owner like UPPER('&1')
) LOOP
sql_stmt := 'BEGIN dbms_stats.gather_table_stats
(ownname => :1, tabname
=> :2,partname=> null, estimate_percent
=> 20, degree => 5 ,cascade => true); END;' ;
EXECUTE IMMEDIATE sql_stmt USING tab_rec.owner, tab_rec.table_name ;
END LOOP;
END;
/
* To view the gathered statistics, check the dictionary:
DBA_TABLES -> owner, table_name, num_rows, blocks, empty_blocks,
avg_space, chain_cnt, avg_row_len, sample_size, last_analyzed
DBA_INDEXES -> owner, index_name, leaf_blocks, distinct_keys,
avg_leaf_blocks_per_key, avg_data_blocks_per_key
Also DBA_PART_COL_STATISTICS and DBA_TAB_COL_STATISTICS
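For instance, a quick check of which tables in a schema have been analyzed and when (the schema name 'SCOTT' is only an example):
select table_name, num_rows, chain_cnt, avg_row_len, last_analyzed
from dba_tables
where owner = 'SCOTT'
order by last_analyzed;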
EXPLAIN PLAN
* The most convenient option is SET AUTOTRACE TRACEONLY (abbreviated TRACE):
it suppresses the query output and returns the execution plan plus the
execution statistics. If you only want the plan, use SET AUTOTRACE
TRACEONLY EXPLAIN (abbreviated TRACE EXP). These are the options:
SET AUTOTRACE ON EXPLAIN - The AUTOTRACE report shows only the optimizer execution path.
SET AUTOTRACE ON STATISTICS - The AUTOTRACE report shows only the SQL statement execution statistics.
SET AUTOTRACE ON - The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics.
SET AUTOTRACE TRACEONLY - Like SET AUTOTRACE ON, but suppresses the printing of the user's query output, if any.
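A typical SQL*Plus session using these settings might look like the sketch below; it reuses the COMPANY and SALES tables from the hint examples later in this document, and the STATISTICS part requires the PLUSTRACE role (created by $ORACLE_HOME/sqlplus/admin/plustrce.sql):
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT Name FROM COMPANY WHERE Company_ID = 1234;
SET AUTOTRACE TRACEONLY
SELECT COUNT(*) FROM SALES WHERE Period_ID = 3;
SET AUTOTRACE OFF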
* To create the plan table, run $ORACLE_HOME/rdbms/admin/utlxplan.sql.
Then generate the plan with
EXPLAIN PLAN FOR
select .............
To view the plan use:
SELECT LPAD(' ',2*(LEVEL-1))||operation||' '||options
||' '||object_name
||' '||DECODE(id, 0, 'Cost = '||position) "Query Plan"
FROM plan_table
START WITH id = 0
-- AND statement_id = 'DIE'
CONNECT BY PRIOR id = parent_id
-- AND statement_id ='DIE';
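If several plans share the same PLAN_TABLE, tag each one with a STATEMENT_ID and uncomment the two statement_id filters in the query above. A small sketch; the 'DIE' tag matches the commented lines, and the statement itself is just the COMPANY/SALES example used later in this document:
EXPLAIN PLAN SET STATEMENT_ID = 'DIE' FOR
select COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID = 3;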
HINTS
- Hints always force the use of the cost-based optimizer (except RULE).
- Use aliases for the table names in the hints.
- Ensure the tables are analyzed.
- Syntax: /*+ HINT HINT ... */ (in PL/SQL the space between the '+' and the first letter of the hint is vital, so /*+ ALL_ROWS */ is fine but /*+ALL_ROWS */ will cause problems).
ALL_ROWS:
The ALL_ROWS hint tells Oracle to optimize the query for best throughput;
that is, to minimize the time needed to return all rows of the query.
This is the default goal of the Oracle optimizer and is appropriate for
BATCH operations. An example is shown below:
select /*+ ALL_ROWS */
COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
The example above would normally use NESTED LOOPS during query execution.
ALL_ROWS forces the optimizer to consider using a MERGE JOIN instead.
NOTE: If ALL_ROWS is specified, the tables used in the query should
already have been analyzed; if they have not, the optimizer estimates
statistics and uses them to choose the execution plan.
AND_EQUAL:
The AND_EQUAL hint tells the optimizer to perform an AND-EQUAL operation
on the indexes listed in the hint (the table is named first, followed by
its indexes). An example:
select /*+ AND_EQUAL(COMPANY COMPANY$CITY COMPANY$STATE) */
Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';
CLUSTER:
The CLUSTER hint tells the optimizer to use a TABLE ACCESS CLUSTER
operation. The hint syntax is:
/*+ CLUSTER(table) */
FIRST_ROWS:
The FIRST_ROWS hint is the opposite of ALL_ROWS; it tells the optimizer
to optimize the query for the shortest response time to return the first
row of the result. The hint is ignored if group functions are used. If
suitable indexes are available or a NESTED LOOPS join is possible, the
optimizer will normally use them.
select /*+ FIRST_ROWS */
COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
NOTE: If FIRST_ROWS is specified, the tables used in the query should already have been analyzed; if they have not, the optimizer estimates statistics and uses them to choose the execution plan.
FULL:
The FULL hint tells the optimizer to perform a TABLE ACCESS FULL on the
specified table. You may want to use this hint if you know that index
access to the table is a poor choice because of the data distribution.
For example, consider the query shown below. With a normal data
distribution, an AND-EQUAL operation on the City and State indexes may
give good performance.
select Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';
What happens if 90 percent of the rows in the table have City = 'Roanoke'
and State = 'VA'? If the index is used to access the 'Roanoke' rows,
almost every row in the table satisfies the condition, so you end up
reading almost all of the index blocks plus almost all of the table
blocks. In that case it takes fewer logical reads to scan the whole table
(via a TABLE ACCESS FULL). You can force that TABLE ACCESS FULL by
disabling the indexes or by using the FULL hint, as shown below.
select /*+ FULL(COMPANY) */
Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';
HASH:
The HASH hint tells the optimizer to use a TABLE ACCESS HASH operation.
The hint syntax is:
/*+ HASH(table) */
INDEX:
The INDEX hint can be used in three different ways:
1. If a single index is listed, that index is used.
2. If several indexes are listed, the optimizer chooses which of them
to use.
3. If a table is listed but no indexes are, the optimizer chooses which
index to use for that table.
Given the following query, the optimizer will choose between using the
index on City, the index on State, or both, whichever is most appropriate.
select /*+ INDEX(COMPANY) */
Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';
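To force one particular index instead of letting the optimizer choose, name it after the table in the hint; a small sketch reusing the COMPANY$CITY index name from the AND_EQUAL example above:
select /*+ INDEX(COMPANY COMPANY$CITY) */
Name, City, State
from COMPANY
where City = 'Roanoke'
and State = 'VA';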
INDEX_ASC:
The INDEX_ASC hint is the same as the INDEX hint.
INDEX_DESC:
The INDEX_DESC hint tells the optimizer to scan the index in descending
order. INDEX_DESC is typically used in subqueries to return the most
recent row from a table that holds historical data.
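A sketch of that pattern, assuming a composite index SALES$COMPANY_PERIOD on SALES(Company_ID, Period_ID) (the index name and the literal 1234 are hypothetical): scanning the index backwards and stopping at the first row returns the latest period for a company.
select /*+ INDEX_DESC(s SALES$COMPANY_PERIOD) */ s.Period_ID, s.Sales_Total
from SALES s
where s.Company_ID = 1234
and rownum = 1;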
NO_MERGE:
The NO_MERGE hint tells the optimizer not to merge a view's SQL into the
query that references the view.
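For example, to keep a view from being merged into the outer query, give the hint the view's alias (the view name SALES_TOTALS_VIEW is made up for this sketch):
select /*+ NO_MERGE(v) */ v.Company_ID, v.Total
from SALES_TOTALS_VIEW v
where v.Total > 1000;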
ORDERED:
The ORDERED hint, when used with NESTED LOOPS joins, influences the order
in which the tables are joined; that is, the structure of the join (which
table drives it). If you use this hint, make sure the data distribution
in the tables will not change much over time; otherwise the specified
order may cause performance problems in the future.
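With ORDERED the join order follows the FROM clause, so in the sketch below COMPANY is the driving table and SALES is probed in the nested loop (the query is the same COMPANY/SALES example used throughout this section):
select /*+ ORDERED USE_NL(SALES) */
COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID = 3
and SALES.Sales_Total > 1000;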
ROWID:
The ROWID hint tells the optimizer to use a TABLE ACCESS BY ROWID
operation. The hint syntax is:
/*+ ROWID(table) */
RULE:
The RULE hint tells the optimizer to use the rule-based optimizer (RBO)
for the statement. All other hints are ignored.
select /*+ RULE */
COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
USE_HASH:
The USE_HASH hint tells the optimizer to perform a hash join.
USE_NL:
The USE_NL hint tells the optimizer to perform a nested loops join, using
the specified table as the driving table of the join. USE_NL is more
specific than the FIRST_ROWS hint, since USE_NL also influences the order
of the tables in the nested loop.
select /*+ USE_NL(COMPANY) */
COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
USE_MERGE:
The USE_MERGE hint is the opposite of USE_NL. It tells the optimizer to
use a merge join between the specified tables, and is more specific than
the ALL_ROWS hint.
select /*+ USE_MERGE(COMPANY, SALES) */
COMPANY.Name
from COMPANY, SALES
where COMPANY.Company_ID = SALES.Company_ID
and SALES.Period_ID =3
and SALES.Sales_Total>1000;
PARTITIONS
* When the partitions are created, each one takes for itself whatever is
specified in INITIAL and each one starts from zero extents of its own;
they behave like independent tables.
* Partitioning example
CREATE TABLE DEPT
(DEPTNO NUMBER(2),
DEPT_NAME VARCHAR2(30))
PARTITION BY RANGE(DEPTNO)
(PARTITION D1 VALUES LESS THAN (10) TABLESPACE DEPT1,
PARTITION D2 VALUES LESS THAN (20) TABLESPACE DEPT2,
PARTITION D3 VALUES LESS THAN (MAXVALUE) TABLESPACE DEPT3);
INSERT INTO DEPT VALUES (1, 'DEPT 1');
INSERT INTO DEPT VALUES (7, 'DEPT 7');
INSERT INTO DEPT VALUES (10, 'DEPT 10');
INSERT INTO DEPT VALUES (15, 'DEPT 15');
INSERT INTO DEPT VALUES (22, 'DEPT 22');
SELECT * FROM DEPT;
DEPTNO DEPT
---------- --------------------
1 DEPT 1
7 DEPT 7
10 DEPT 10
15 DEPT 15
22 DEPT 22
SELECT * FROM DEPT PARTITION (D1);
DEPTNO DEPT
--------- --------------------
1 DEPT 1
7 DEPT 7
ALTER TABLE DEPT DROP PARTITION D3;
SELECT * FROM DEPT;
DEPTNO DEPT
---------- --------------------
1 DEPT 1
7 DEPT 7
10 DEPT 10
15 DEPT 15
SELECT TABLE_NAME, PARTITIONED FROM USER_TABLES;
TABLE_NAME PAR
------------------------------ ---
DEPT YES
TEMP NO
SELECT OWNER, TABLE_NAME, PARTITION_COUNT FROM SYS.DBA_PART_TABLES;
OWNER TABLE_NAME PARTITION_COUNT
----------- ----------------- ---------------
TEMP DEPT 2
SELECT SEGMENT_NAME, PARTITION_NAME, SEGMENT_TYPE, TABLESPACE_NAME
FROM USER_SEGMENTS;
SEGMENT_NAME PARTITION_NAME SEGMENT_TYPE TABLESPACE_NAME
------------ -------------- ----------------- ---------------
TEMP TABLE USER_DATA
DEPT D1 TABLE PARTITION DEPT1
DEPT D2 TABLE PARTITION DEPT2
* Multi-column partitioning
create table cust_sales (
acct_no number(5),
cust_name char(30),
sale_day integer not null,
sale_mth integer not null,
sale_yr integer not null)
partition by range (sale_yr, sale_mth, sale_day)
(partition cust_sales_q1 values less than (1998, 04, 01) tablespace users,
partition cust_sales_q2 values less than (1998, 07, 01) tablespace users2,
partition cust_sales_q3 values less than (1998, 10, 01) tablespace users,
partition cust_sales_q4 values less than (1999, 01, 01) tablespace users2);
* Partitioning Indexes
create index dept_index on dept (deptno)
local
(partition d1 tablespace dept2,
partition d2 tablespace dept3,
partition d3 tablespace dept1);
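For comparison with the LOCAL index above, a GLOBAL range-partitioned index might look like the sketch below (the index name, partition bounds, and column choice are only illustrative; the last partition of a global index must be bounded by MAXVALUE):
create index dept_gidx on dept (dept_name)
global partition by range (dept_name)
(partition g1 values less than ('M') tablespace dept1,
partition g2 values less than (maxvalue) tablespace dept2);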
* It is HIGHLY RECOMMENDED to place each partition in its own tablespace, and to give those tablespaces clear names
* To drop a tablespace that contains partitions:
alter table xxx truncate partition part_name
alter table xxx drop partition part_name
drop tablespace xxx
* Move a partition = alter table xxx move partition part_name tablespace tbspc_name
* Add a partition (in the middle) = alter table xxx split partition part_name
* Split a partition = alter table xxx split partition old_part at
(values) into (partition new_name, partition new_name); (see the sketch after this list)
* Convert a partition into a standalone table:
alter table xxx exchange partition part_name with table table_new_no_part
* Index partitioning: study it carefully, because there are two kinds, local and global (see the examples above).
* The views USER_TAB_PARTITIONS and USER_IND_PARTITIONS show the
partition ranges. Use:
select partition_name, high_value from ....
* To analyze a partitioned table or index use:
analyze table xxx partition (part_name) compute statistics;
analyze index xxx partition (part_name) compute statistics;
* It is advisable to analyze tables partition by partition, not as a whole
* To import/export partitions the syntax is the same as usual, but in the table= clause specify table_name:part_name
* In SQL*Loader specify: into table table_name partition (part_name) (col)
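A worked sketch of the maintenance commands above, using the DEPT table from this section (the standalone table DEPT_OLD in the exchange example is hypothetical and must have the same column structure as DEPT; local index partitions may need a REBUILD after MOVE or SPLIT):
-- Move partition D2 to another tablespace
alter table dept move partition d2 tablespace dept1;
-- Split partition D2 (values less than 20) into two at deptno 15
alter table dept split partition d2 at (15)
into (partition d2a, partition d2b);
-- Swap partition D1 with a non-partitioned table of the same structure
alter table dept exchange partition d1 with table dept_old;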
PARALLELISM
* Very useful for VLDBs or data warehouses; it is used on machines and
operating systems that provide multiple CPUs
* The parameters to set are:
- PARALLEL_MIN_SERVERS = minimum number of parallel query server processes
to start
- PARALLEL_MAX_SERVERS = 2 * number of CPUs * number of concurrent users
- OPTIMIZER_PERCENT_PARALLEL = determines how aggressively the CBO
parallelizes an operation. Set it to 100 / number of concurrent users
- SHARED_POOL_SIZE = set to current_value + ((3 * message_buffer_size)
* (CPUs + 2) * PARALLEL_MAX_SERVERS)
(the message buffer size is generally 2048)
- ALWAYS_ANTI_JOIN = hash
- ROW_LOCKING = always
- COMPATIBLE = 8.1.7
* Queries will basically use HINTS. Tables can also be created with a specified degree of parallelism (see the sketch below).
* For more information, see chapter 5 of the Tuning guide
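A minimal sketch of both approaches; the degree of 4 and the SALES_HIST table name are only example values:
-- Request a parallel full scan through hints
select /*+ FULL(s) PARALLEL(s, 4) */ count(*)
from SALES s;
-- Create a table with a default degree of parallelism
create table sales_hist
parallel (degree 4)
as select * from SALES;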