Experience: is what you get soon after you need it.

Rasul Allah (sal Allahu alaihi wa sallam) said: "Restore the trusts of those who trust you, and deal not falsely with him who deals falsely with you." [Abu Dawud, Tirmidhi]


Sunday, April 18, 2010

RMAN EXPIRED VS OBSOLETE

EXPIRED:
If you run CROSSCHECK, and if RMAN cannot locate the files, then it updates their
records in the RMAN repository to EXPIRED status. You can then use the DELETE
EXPIRED command to remove records of expired backups and copies from the RMAN
repository.
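A typical housekeeping sequence is to crosscheck first and then delete the expired records (a sketch; run it from an RMAN session connected to the target):

```sql
RMAN> CROSSCHECK BACKUP;        -- marks backup pieces that cannot be found as EXPIRED
RMAN> CROSSCHECK COPY;          -- the same check for image copies
RMAN> DELETE EXPIRED BACKUP;    -- removes the EXPIRED backup records from the repository
RMAN> DELETE EXPIRED COPY;
```

Note that DELETE EXPIRED removes only repository records; the files themselves are already missing from disk.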

OBSOLETE:
Obsolete backups are backups that are no longer needed to satisfy the specified recoverability requirements. You can delete obsolete files according to the configured default retention policy, or according to another retention policy that you specify.
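For example, you can report and then delete obsolete files like this (the 7-day recovery window in the last command is just an illustration of overriding the configured policy):

```sql
RMAN> REPORT OBSOLETE;                            -- list files obsolete under the configured policy
RMAN> DELETE OBSOLETE;                            -- delete per the configured retention policy
RMAN> DELETE OBSOLETE RECOVERY WINDOW OF 7 DAYS;  -- or per a policy specified on the command
```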

Fast Recovery Area usage

SELECT * FROM V$RECOVERY_AREA_USAGE;
FILE_TYPE    PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE                   0                         0               0
ONLINELOG                     2                         0              22
ARCHIVELOG                 4.05                      2.01              31
BACKUPPIECE                3.94                      3.86               8
IMAGECOPY                 15.64                     10.43              66
FLASHBACKLOG                .08                         0               1
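To see overall space pressure rather than the per-file-type breakdown, you can also query V$RECOVERY_FILE_DEST (a minimal sketch):

```sql
SELECT NAME, SPACE_LIMIT, SPACE_USED, SPACE_RECLAIMABLE, NUMBER_OF_FILES
FROM   V$RECOVERY_FILE_DEST;
```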

Backup Size

Query the V$RMAN_BACKUP_JOB_DETAILS view for the size of the backups in an
RMAN session.
If you run BACKUP DATABASE, then
V$RMAN_BACKUP_JOB_DETAILS.OUTPUT_BYTES shows the total size of backup
sets written by the backup job for the database that you are backing up. To view
backup set sizes for all registered databases, query
RC_RMAN_BACKUP_JOB_DETAILS.
The following query shows the backup job size ordered by session key, which is
the primary key for the RMAN session. The columns in_size and out_size
display the amount of data input and output.
COL in_size FORMAT a10
COL out_size FORMAT a10
SELECT SESSION_KEY,
INPUT_TYPE,
COMPRESSION_RATIO,
INPUT_BYTES_DISPLAY in_size,
OUTPUT_BYTES_DISPLAY out_size
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;
The following sample output shows the sizes of the backup jobs:
SESSION_KEY INPUT_TYPE    COMPRESSION_RATIO IN_SIZE    OUT_SIZE
----------- ------------- ----------------- ---------- ----------
         10 DATAFILE FULL                 1 602.50M    602.58M
         17 DB FULL                   1.137 634.80M    558.13M

Backup Job Speed

Query the V$RMAN_BACKUP_JOB_DETAILS view for the rate of backup jobs in an
RMAN session.

The following query shows the backup job speed ordered by session key, which is
the primary key for the RMAN session. The columns in_sec and out_sec
display the data input and output per second.
COL in_sec FORMAT a10
COL out_sec FORMAT a10
COL TIME_TAKEN_DISPLAY FORMAT a10
SELECT SESSION_KEY,
OPTIMIZED,
COMPRESSION_RATIO,
INPUT_BYTES_PER_SEC_DISPLAY in_sec,
OUTPUT_BYTES_PER_SEC_DISPLAY out_sec,
TIME_TAKEN_DISPLAY
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;
The following sample output shows the speed of the backup jobs:
SESSION_KEY OPT COMPRESSION_RATIO IN_SEC OUT_SEC TIME_TAKEN
----------- --- ----------------- ---------- ---------- ----------
9 NO 1 8.24M 8.24M 00:01:14
16 NO 1.3273 6.77M 5.10M 00:01:45
113 NO 1 2.99M 2.99M 00:00:44

Backup Job History

To query details about past and current RMAN jobs:
1. Connect SQL*Plus to the database whose backup history you intend to query.
2. Query the V$RMAN_BACKUP_JOB_DETAILS view for information about the
backup type, status, and start and end time.
The following query shows the backup job history ordered by session key, which
is the primary key for the RMAN session:
COL STATUS FORMAT a9
COL hrs FORMAT 999.99
SELECT SESSION_KEY, INPUT_TYPE, STATUS,
TO_CHAR(START_TIME,'mm/dd/yy hh24:mi') start_time,
TO_CHAR(END_TIME,'mm/dd/yy hh24:mi') end_time,
ELAPSED_SECONDS/3600 hrs
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;
The following sample output shows the backup job history:
SESSION_KEY INPUT_TYPE STATUS START_TIME END_TIME HRS
----------- ------------- --------- -------------- -------------- -------
9 DATAFILE FULL COMPLETED 04/18/07 18:14 04/18/07 18:15 .02
16 DB FULL COMPLETED 04/18/07 18:20 04/18/07 18:22 .03
113 ARCHIVELOG COMPLETED 04/23/07 16:04 04/23/07 16:05 .01

Thursday, April 15, 2010

RMAN - Block Change Tracking

Using Block Change Tracking to Improve Incremental Backup Performance

The block change tracking feature for incremental backups improves backup
performance by recording changed blocks for each datafile.
About Block Change Tracking
If block change tracking is enabled on a primary or standby database, then RMAN
uses a block change tracking file to identify changed blocks for incremental backups.
By reading this small bitmap file to determine which blocks changed, RMAN avoids
having to scan every block in the datafile that it is backing up.
Block change tracking is disabled by default. Nevertheless, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks are changed between backups. If your backup strategy involves incremental backups, then block change tracking is recommended. Block change
tracking in no way changes the commands used to perform incremental backups.
The change tracking file requires no maintenance after initial configuration.
You can only enable block change tracking at a physical standby database if a license
for the Oracle Active Data Guard option is enabled.
Space Management in the Block Change Tracking File

The change tracking file maintains bitmaps that mark changes in the datafiles between backups. The database performs a
bitmap switch before each backup. Oracle Database automatically manages space in
the change tracking file to retain block change data that covers the 8 most recent
backups. After the maximum of 8 bitmaps is reached, the oldest bitmap is overwritten
by the bitmap that tracks the current changes.
The first level 0 incremental backup scans the entire datafile. Subsequent incremental backups use the block change tracking file to scan only the blocks that have been marked as changed since the last backup. An incremental backup can be optimized only when it is based on a parent backup that was made after the start of the oldest bitmap in the block change tracking file.

Consider the 8-bitmap limit when developing your incremental backup strategy. For
example, if you make a level 0 database backup followed by 7 differential incremental
backups, then the block change tracking file now includes 8 bitmaps. If you then make
a cumulative level 1 incremental backup, then RMAN cannot optimize the backup
because the bitmap corresponding to the parent level 0 backup is overwritten with the
bitmap that tracks the current changes.
Location of the Block Change Tracking File

One block change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in the destination specified by the DB_CREATE_FILE_DEST initialization
parameter. You can also place the change tracking file in any location you choose, by
specifying its name when enabling block change tracking. Oracle recommends against
using a raw device (that is, a disk without a file system) as a change tracking file.

RMAN does not support backup and recovery of the change tracking file. The
database resets the change tracking file when it determines that the change tracking
file is invalid. If you restore and recover the whole database or a subset, then the
database resets the block change tracking file and starts tracking changes again. After you make a level 0 incremental backup, the next incremental backup is able to use change tracking data.

Size of the Block Change Tracking File

The size of the block change tracking file is proportional to the size of the database and the number of enabled threads of redo.
The size of the block change tracking file can increase and decrease as the database
changes. The size is not related to the frequency of updates to the database.
Typically, the space required for block change tracking for a single instance is
approximately 1/30,000 the size of the data blocks to be tracked. For an Oracle RAC
environment, it is 1/30,000 of the size of the database, times the number of enabled
threads.
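As a rough worked example of the 1/30,000 rule, the following query estimates the per-thread change tracking space from the current datafile sizes (a sketch only; the actual file is also subject to the minimum allocations described below):

```sql
-- Sum of datafile bytes / 30,000, expressed in MB.
-- For an Oracle RAC database, multiply by the number of enabled redo threads.
SELECT ROUND(SUM(BYTES) / 30000 / 1024 / 1024, 1) AS est_bct_mb
FROM   V$DATAFILE;
```

For instance, a 300 GB single-instance database works out to about 10 MB.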

The following factors may cause the file to be larger than this estimate suggests:
■ To avoid the overhead of allocating space as your database grows, the block
change tracking file size starts at 10 MB. New space is allocated in 10 MB
increments. Thus, for any database up to approximately 300 GB, the file size is no
smaller than 10 MB; for up to approximately 600 GB it is no smaller than
20 MB; and so on.
■ For each datafile, a minimum of 320 KB of space is allocated in the block change
tracking file, regardless of the size of the datafile. Thus, if you have a large number
of relatively small datafiles, the change tracking file is larger than for a database
with a smaller number of larger datafiles containing the same data.
Enabling and Disabling Block Change Tracking

You can enable block change tracking when the database is either open or mounted.
This section assumes that you intend to create the block change tracking file as an
Oracle Managed File in the database area, which is where the database maintains
active database files such as datafiles, control files, and online redo log files.

To enable block change tracking:
1. Start SQL*Plus and connect to a target database with administrator privileges.
2. Make sure that the DB_CREATE_FILE_DEST initialization parameter is set.
SHOW PARAMETER DB_CREATE_FILE_DEST
If the parameter is not set, and if the database is open, then you can set the
parameter with the following form of the ALTER SYSTEM statement:
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/disk1/bct/' SCOPE=BOTH SID='*';
3. Enable block change tracking.
Execute the following ALTER DATABASE statement:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

You can also create the change tracking file in a location you choose yourself by
using the following form of SQL statement:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/mydir/rman_change_track.f' REUSE;

The REUSE option tells Oracle Database to overwrite any existing block change
tracking file with the specified name.
Disabling Block Change Tracking
This section assumes that the block change tracking feature is currently enabled. When you disable block change tracking, the database removes the block change tracking file from the operating system.

To disable block change tracking:
1. Start SQL*Plus and connect to a target database with administrator privileges.
2. Ensure that the target database is mounted or open.
3. Disable block change tracking.
Execute the following ALTER DATABASE statement:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
Checking Whether Change Tracking is Enabled
You can query the V$BLOCK_CHANGE_TRACKING view to determine whether change
tracking is enabled, and if it is, the filename of the block change tracking file.
To determine whether change tracking is enabled:

Enter the following query in SQL*Plus (sample output included):
COL STATUS FORMAT A8
COL FILENAME FORMAT A60
SELECT STATUS, FILENAME
FROM V$BLOCK_CHANGE_TRACKING;
STATUS FILENAME
-------- ------------------------------------------------------------
ENABLED /disk1/bct/RDBMS/changetracking/o1_mf_2f71np5j_.chg

Changing the Location of the Block Change Tracking File
To move the change tracking file, use the ALTER DATABASE RENAME FILE
statement. The database must be mounted. The statement updates the control file to
refer to the new location and preserves the contents of the change tracking file. If you cannot shut down the database, then you can disable and enable block change
tracking.
In this case, you lose the contents of the existing block change tracking file.
To change the location of the change tracking file:
1. Start SQL*Plus and connect to a target database.
2. If necessary, determine the current name of the change tracking file:
SQL> SELECT FILENAME FROM V$BLOCK_CHANGE_TRACKING;
3. If possible, shut down the database.
For example:

SQL> SHUTDOWN IMMEDIATE
If you shut down the database, then skip to the next step. If you choose not to shut
down the database, then execute the following SQL statements and skip all
remaining steps:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';
In this case you lose the contents of the block change tracking file. Until the next
time you complete a level 0 incremental backup, RMAN must scan the entire file.
4. Using host operating system commands, move the change tracking file to its new
location.
5. Mount the database and use the ALTER DATABASE RENAME FILE statement to
point the control file at the new location of the change tracking file. For example:
ALTER DATABASE RENAME FILE
'/disk1/bct/RDBMS/changetracking/o1_mf_2f71np5j_.chg' TO
'/disk2/bct/RDBMS/changetracking/o1_mf_2f71np5j_.chg';
This statement changes the location of the change tracking file while preserving its
contents.
6. Open the database:
SQL> ALTER DATABASE OPEN;

How frequently should I back up?

When deciding how often to take level 0 backups, a good rule of thumb is to take a
new level 0 backup whenever 20% or more of the data has changed. If the rate of
change to your database is predictable, then you can observe the size of your
incremental backups to determine when a new level 0 is appropriate. The following
SQL query determines the number of blocks written to an incremental level 1 backup
of each datafile with at least 20% of its blocks backed up:
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME,
BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .2
ORDER BY COMPLETION_TIME;

Find who did it using Logminer

Step 1

ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI:SS';

EXECUTE SYS.DBMS_LOGMNR.START_LOGMNR(STARTTIME => '04/13/2010 10:25:00', -
ENDTIME => '04/13/2010 10:45:00', -
OPTIONS => SYS.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
SYS.DBMS_LOGMNR.CONTINUOUS_MINE + -
SYS.DBMS_LOGMNR.NO_SQL_DELIMITER);


Step 2

SQL> SET LINESIZE 132
SQL> SET PAGES 1000
SQL> COL OPERATION FORMAT A10
SQL> COL OBJ_NAME FORMAT A40
SQL> COL SQL_REDO FORMAT A60 WRAP
SQL> COL USERNAME FORMAT A15
SQL> COL SESSION_INFO FORMAT A40 WRAP
SQL>
SQL> SELECT OPERATION, SUBSTR(SEG_OWNER || '.' || SEG_NAME, 1, 40) OBJ_NAME, TIMESTAMP, USERNAME, SESSION#, SERIAL#, SQL_REDO, SESSION_INFO
2 FROM V$LOGMNR_CONTENTS
3 WHERE SEG_OWNER = 'ISA' AND SEG_NAME = 'I2_DISPO_ACTIVITY'
4 AND (OPERATION = 'DDL');

OPERATION OBJ_NAME TIMESTAMP USERNAME SESSION# SERIAL#
---------- ---------------------------------------- ------------------- --------------- ---------- ----------
SQL_REDO SESSION_INFO
------------------------------------------------------------ ----------------------------------------
DDL ISA.I2_DISPO_ACTIVITY 04/13/2010 10:43:47 UNKNOWN 0 0
DROP TABLE EMP


Step 3

EXECUTE DBMS_LOGMNR.END_LOGMNR;


To capture details such as the username and session information, you must enable supplemental logging, as shown below.

Verify supplemental logging is turned on
SQL>SELECT NAME,SUPPLEMENTAL_LOG_DATA_MIN,SUPPLEMENTAL_LOG_DATA_PK,SUPPLEMENTAL_LOG_DATA_UI,SUPPLEMENTAL_LOG_DATA_FK,SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;

NAME SUPPLEME SUP SUP SUP SUP
--------- -------- --- --- --- ---
CM3P NO NO NO NO NO

Next, you must enable supplemental logging. Supplemental logging places additional column data into the redo log file whenever an UPDATE operation is performed. At the least, minimal database-level supplemental logging must be enabled for any Change Data Capture source database:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

more at :

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm

Friday, April 9, 2010

Full Backup Vs Incremental

The only difference between a level 0 incremental backup and a full backup is that a
full backup is never included in an incremental strategy. Thus, an incremental level 0 is
a full backup that happens to be the parent of incremental backups whose level is
greater than 0.
A level 1 incremental backup can be either of the following types:
■ A differential incremental backup, which backs up all blocks changed after the
most recent incremental backup at level 1 or 0
■ A cumulative incremental backup, which backs up all blocks changed after the
most recent incremental backup at level 0
Incremental backups are differential by default.

Note: Cumulative backups are preferable to differential backups
when recovery time is more important than disk space, because
fewer incremental backups need to be applied during recovery.
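The two level 1 variants, as plain RMAN commands:

```sql
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;             -- parent for subsequent level 1 backups
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;             -- differential: changes since the last level 1 or 0
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  -- cumulative: changes since the last level 0
```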

RMAN Backup multiplexing

Multiplexed Backup Sets
When creating backup sets, RMAN can simultaneously read multiple files from disk
and then write their blocks into the same backup set. For example, RMAN can read
from two datafiles simultaneously, and then combine the blocks from these datafiles
into a single backup piece. The combination of blocks from multiple files is called
backup multiplexing. Image copies, by contrast, are never multiplexed.

RMAN multiplexing is determined by several factors. For example, the FILESPERSET
parameter of the BACKUP command determines how many datafiles to put in each
backup set. The MAXOPENFILES parameter of ALLOCATE CHANNEL or CONFIGURE
CHANNEL defines how many datafiles RMAN can read from simultaneously. The basic
multiplexing algorithm is as follows:
■ Number of files in each backup set
This number is the minimum of the FILESPERSET setting and the number of files
read by each channel. The FILESPERSET default is 64.
■ The level of multiplexing
This is the number of input files simultaneously read and then written into the
same backup piece. The level of multiplexing is the minimum of MAXOPENFILES
and the number of files in each backup set. The MAXOPENFILES default is 8.
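A sketch showing both knobs (the values are illustrative, not recommendations):

```sql
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXOPENFILES 4;  -- read at most 4 files simultaneously
RMAN> BACKUP DATABASE FILESPERSET 8;                      -- at most 8 datafiles per backup set
```

With these settings the level of multiplexing is min(4, number of files in the backup set).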

RMAN backup compression

Block Compression for Backup Sets
RMAN uses two types of block compression when creating backup sets:
1. Unused Block Compression (Supports disk backup and OSB tape backup)
2. Null Block Compression (Supports all backups)

Unused Block Compression
During unused block compression, RMAN does not check each block. Instead,
RMAN reads the bitmaps that indicate what blocks are currently allocated and then
only reads the blocks that are currently allocated.
Unused block compression is turned on automatically when all of the following five
conditions are true:
1. The COMPATIBLE initialization parameter is set to 10.2 or higher.
2. There are currently no guaranteed restore points defined for the database.
3. The datafile is locally managed.
4. The datafile is being backed up to a backup set as part of a full backup or a level 0 incremental backup.
5. The backup set is created on disk or Oracle Secure Backup is the media manager.

Null Block Compression
During null block compression, RMAN checks every block to see if it has ever
contained data. Blocks that have never contained data are not backed up. Blocks that
have contained data, either currently or at some point in time in the past, are backed up.

Binary Compression for Backup Sets
RMAN supports binary compression of backup sets. Backup compression is enabled
only when you specify AS COMPRESSED BACKUPSET in the BACKUP command, or
configure it persistently with CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO
COMPRESSED BACKUPSET.
You have two binary compression options:

You can use the BASIC compression algorithm, which does not require the
Advanced Compression Option. This setting offers a compression ratio
comparable to MEDIUM, at the expense of additional CPU consumption.

If the Advanced Compression Option is licensed, you can instead select the LOW,
MEDIUM, or HIGH algorithm, which trade compression ratio against CPU time.
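For example, to configure and use compressed backup sets (BASIC shown, since it needs no extra license):

```sql
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC';
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE;   -- per-command alternative to the CONFIGURE setting
```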

current sid and session info

select * from v$session where sid=(select distinct sid from v$mystat);

Thursday, April 8, 2010

Exclude tablespaces in Backup

For example, you can exclude the tablespaces DATA_01 and INDEX_01 from whole
database backups as follows:
CONFIGURE EXCLUDE FOR TABLESPACE DATA_01;
CONFIGURE EXCLUDE FOR TABLESPACE INDEX_01;


If you then run the following command, RMAN backs up all tablespaces in the
database except DATA_01 and INDEX_01:
BACKUP DATABASE;
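To override or remove an exclusion later (using the same tablespace names):

```sql
RMAN> BACKUP DATABASE NOEXCLUDE;                       -- one-off backup that ignores the exclusions
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE DATA_01 CLEAR;  -- remove the configured exclusion
```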

11g Restore points

Normal vs Guaranteed restore points:

Guaranteed Restore Points
Like a normal restore point, a guaranteed restore point serves as an alias for an SCN
in recovery operations. A principal difference is that guaranteed restore points never
age out of the control file and must be explicitly dropped. In general, you can use a
guaranteed restore point as an alias for an SCN with any command that works with a
normal restore point.

If the recovery area has enough disk space to store the needed logs, then you can use a
guaranteed restore point to rewind a whole database to a known good state days or
weeks ago. As with Flashback Database, even the effects of NOLOGGING operations
like direct load inserts can be reversed with guaranteed restore points.

Limitations that apply to Flashback Database also apply to
guaranteed restore points


To create a restore point:
1. Connect SQL*Plus to a target database.
2. Ensure that the database is open or mounted. If the database is mounted, then it
must have been shut down cleanly (unless it is a physical standby database).
3. Run the CREATE RESTORE POINT statement.
The following example shows how to create a normal restore point in SQL*Plus:
SQL> CREATE RESTORE POINT before_upgrade;
This example shows how to create a guaranteed restore point:
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
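To list existing restore points and drop one when it is no longer needed:

```sql
SQL> SELECT NAME, SCN, TIME, GUARANTEE_FLASHBACK_DATABASE
  2  FROM V$RESTORE_POINT;
SQL> DROP RESTORE POINT before_upgrade;
```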

FLASHBACK DATABASE - CONS

Limitations of Flashback Database Because Flashback Database works by undoing
changes to the datafiles that exist at the moment that you run the command, it has the
following limitations:
■ Flashback Database can only undo changes to a datafile made by an Oracle
database. It cannot be used to repair media failures, or to recover from accidental
deletion of datafiles.
■ You cannot use Flashback Database to undo a shrink datafile operation. However,
you can take the shrunken datafile offline, flash back the rest of the database, and
then later restore and recover the shrunken datafile.
■ You cannot use Flashback Database alone to retrieve a dropped datafile. If you
flash back the database to a time when a dropped datafile existed, Flashback
Database only adds the datafile entry back to the control file. You can recover the
dropped datafile by then restoring and recovering it.
■ If the database control file is restored from backup or re-created, all accumulated
flashback log information is discarded. You cannot use FLASHBACK DATABASE
to return to a point in time before the restore or re-creation of a control file
■ When using Flashback Database with a target time at which a NOLOGGING
operation was in progress, block corruption is likely in the database objects and
datafiles affected by the NOLOGGING operation. For example, if you perform a
direct-path INSERT operation in NOLOGGING mode, and that operation runs from
9:00 to 9:15 on April 3, 2005, and you later need to use Flashback Database to
return to the target time 09:07 on that date, the objects and datafiles updated by
the direct-path INSERT may be left with block corruption after the Flashback
Database operation completes.
If possible, avoid using Flashback Database with a target time or SCN that coincides
with a NOLOGGING operation. Also, perform a full or incremental backup of the
affected datafiles immediately after any NOLOGGING operation to ensure
recoverability to points in time after the operation. If you expect to use Flashback
Database to return to a point in time during an operation such as a direct-path
INSERT, consider performing the operation in LOGGING mode.


If "flashback buf free by RVWR" is the top wait event, then you know
that Oracle Database cannot write flashback logs quickly enough. In that case, you may want to tune the file system and storage used by the fast recovery area.


The V$FLASHBACK_DATABASE_STAT view shows the bytes of flashback data logged
by the database. Each row in the view shows the statistics accumulated (typically over
the course of an hour). The FLASHBACK_DATA and REDO_DATA columns show the
bytes of flashback data and redo data written respectively during the time interval,
while the DB_DATA column shows the bytes of data blocks read and written. The
columns FLASHBACK_DATA and REDO_DATA correspond to sequential writes, while
DB_DATA corresponds to random reads and writes.
Because of the difference between sequential I/O and random I/O, a better indication
of I/O overhead is the number of I/O operations issued for flashback logs

V$SYSSTAT Statistics
Column Name                 Column Meaning
physical write I/O requests The number of write operations issued for writing data blocks
physical read I/O requests  The number of read operations issued for reading data blocks
redo writes                 The number of write operations issued for writing to the redo log
flashback log writes        The number of write operations issued for writing to flashback logs
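You can read these counters directly from V$SYSSTAT along the following lines (a sketch; exact statistic names vary slightly between releases, so adjust the filters to what your version reports):

```sql
SELECT NAME, VALUE
FROM   V$SYSSTAT
WHERE  NAME IN ('flashback log writes', 'redo writes')
   OR  NAME LIKE 'physical%IO requests';
```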

DELETE INPUT VS DELETE ALL INPUT

If you specify DELETE INPUT (without ALL), then RMAN deletes only the copy that it backs up.

If you specify ALL, then RMAN deletes all copies of the specified logs that it finds in the V$ARCHIVED_LOG view.

Ex:
The BACKUP ... DELETE INPUT command can delete archived redo logs, datafile copies, and backup sets after backing them up. This functionality is especially useful when backing up archived logs on disk to tape. RMAN backs up one and only one copy of each log sequence number, and then deletes the file that it backs up. For example, assume that you issue:

BACKUP ARCHIVELOG ALL DELETE INPUT;

In this command, RMAN backs up one copy of each log for each available sequence number, and then deletes only the file that it actually backs up.

If you specify the DELETE ALL INPUT option, then RMAN deletes whichever files match the criteria that you specify, even if there are several files of the same log sequence number. For example, assume that you archive to 3 different directories. Then, you issue this command:

BACKUP ARCHIVELOG ALL FROM SEQUENCE 1200 DELETE ALL INPUT;

In this case, RMAN backs up only one copy of each log sequence between 1200 and the most recent sequence, but deletes all logs with these sequence numbers contained in the three archive destinations.

The archived log failover feature means that RMAN searches every enabled archiving destination for good copies of a log sequence number. For example, assume that /log1 and /log2 are the only enabled archiving destinations, and that they contain the same sequence number. You run this command:

BACKUP ARCHIVELOG FROM SEQUENCE 123 DELETE ALL INPUT;

RMAN can start reading from any enabled archiving directory. For example, assume RMAN starts in directory /log1 and finds log_123.f there. Then, if RMAN discovers that log_124.f is corrupt, it searches in /log2 for a good copy of this log. Because DELETE ALL INPUT is specified, RMAN deletes all copies of logs on disk of sequence 123 and higher.

Wednesday, April 7, 2010

how many channels should we allocate?

Determining Channel Parallelism to Match Hardware Devices

RMAN can perform the I/O required for many commands in parallel, to make optimal use of your hardware resources. To perform I/O in parallel, however, the I/O must be associated with a single RMAN command, not a series of commands. For example, if backing up three datafiles, issue the command

BACKUP DATAFILE 5,6,7;

rather than issuing the commands

BACKUP DATAFILE 5;
BACKUP DATAFILE 6;
BACKUP DATAFILE 7;

When all three datafiles are backed up in one command, RMAN recognizes the opportunity for parallelism and can use multiple channels to do the I/O in parallel. When three separate commands are used, RMAN can only perform the backups one at a time, regardless of available channels and I/O devices.

The number of channels available (whether allocated in a RUN block or configured in advance) for use with a device at the moment that you run a command determines whether RMAN will read from or write to that device in parallel while carrying out the command. Failing to allocate the right number of channels adversely affects RMAN performance during I/O operations.

As a rule, the number of channels used in carrying out an individual RMAN command should match the number of physical devices accessed in carrying out that command. If manually allocating channels for a command, allocate one for each device; if configuring automatic channels, configure the PARALLELISM setting appropriately.

When backing up to tape, you should allocate one channel for each tape drive. When backing up to disk, allocate one channel for each physical disk, unless you can optimize the backup for your disk topography by using multiple disk channels. Each manually allocated channel uses a separate connection to the target or auxiliary database.
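For example, with two tape drives you might configure automatic parallelism or allocate channels manually (a sketch; the channel names are arbitrary):

```sql
RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 2;

RMAN> RUN
{
  ALLOCATE CHANNEL t1 DEVICE TYPE sbt;
  ALLOCATE CHANNEL t2 DEVICE TYPE sbt;
  BACKUP DATABASE;
}
```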

When three separate BACKUP commands are used to back up one file each, the backups are created sequentially: only one channel is active at any one time, because only one file is backed up by each command.

Tuesday, April 6, 2010

How much FAST RECOVERY AREA?

Use the following formula to estimate the disk quota, where n is the interval in days
between incremental updates and y is the delay in applying the foreign archived redo
logs on a logical standby database:

Disk Quota =
Size of a copy of database +
Size of an incremental backup +
Size of (n+1) days of archived redo logs +
Size of (y+1) days of foreign archived redo logs (for logical standby) +
Size of control file +
Size of an online redo log member * number of log groups +
Size of flashback logs (based on DB_FLASHBACK_RETENTION_TARGET value)
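Several of the fixed-size terms in the formula can be pulled from the data dictionary along these lines (a sketch; the backup-size terms still come from measuring your own backups, and V$FLASHBACK_DATABASE_LOG is populated only when flashback logging is enabled):

```sql
SELECT (SELECT SUM(BYTES) FROM V$DATAFILE)                       AS db_size,
       (SELECT SUM(BYTES * MEMBERS) FROM V$LOG)                  AS online_redo_size,
       (SELECT BLOCK_SIZE * FILE_SIZE_BLKS
          FROM V$CONTROLFILE WHERE ROWNUM = 1)                   AS controlfile_size,
       (SELECT ESTIMATED_FLASHBACK_SIZE
          FROM V$FLASHBACK_DATABASE_LOG)                         AS flashback_size
FROM DUAL;
```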


To determine the optimum size for the fast recovery area:
1. If you plan to use flashback logging or guaranteed restore points, then query
V$ARCHIVED_LOG to determine how much redo the database generates in the
time to which you intend to set DB_FLASHBACK_RETENTION_TARGET.


If flashback logging is enabled, then run the database under a normal workload
for the time period specified by DB_FLASHBACK_RETENTION_TARGET.
In this way, the database can generate a representative sample of flashback logs.

Query the V$FLASHBACK_DATABASE_LOG view as follows:
SELECT ESTIMATED_FLASHBACK_SIZE
FROM V$FLASHBACK_DATABASE_LOG;

The result is an estimate of the disk space needed to meet the current flashback
retention target, based on the database workload since Flashback Database was
enabled.


Oracle recommends that DB_RECOVERY_FILE_DEST be set to a different value from
DB_CREATE_FILE_DEST or any of the DB_CREATE_ONLINE_LOG_DEST_n
initialization parameters.

Friday, April 2, 2010

VALIDATE RMAN BACKUP

Use the following command to validate all database files and
archived redo log files for physical and logical corruption:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

You can also use the VALIDATE command to check individual data blocks:

RMAN> VALIDATE DATAFILE 4 BLOCK 10 TO 13;

You can also validate backup sets:

RMAN>VALIDATE BACKUPSET 3;

RMAN CATALOG BACKUPS

1)Catalog an archive log

RMAN> CATALOG ARCHIVELOG '/LOC/arc01.arc', '/LOC/arc02.arc';

2)Catalog a datafile copy as a level 0 backup

RMAN>CATALOG DATAFILECOPY '/LOC/DATAFILE01.dbf' LEVEL 0;

3)Catalog multiple copies in a directory

RMAN>CATALOG START WITH '/LOC/BACKUPDIR';

4)Catalog files in the fast recovery area:

RMAN>CATALOG RECOVERY AREA;

5)Catalog backup pieces:
RMAN>CATALOG BACKUPPIECE '/LOC/PIECENAME';
EX:-
RMAN> CATALOG START WITH '+disk';           # catalog all files from an ASM disk group
RMAN> CATALOG START WITH '/fs1/datafiles/'; # catalog all files in directory


RMAN> catalog start with '+RESTOREDB_FRA01/RESTOREDBDUP/ARCHIVELOG/2011_12_06';

starting full resync of recovery catalog
full resync complete
searching for all files that match the pattern +RESTOREDB_FRA01/RESTOREDBDUP/ARCHIVELOG/2011_12_06

List of Files Unknown to the Database
=====================================
File Name: +RESTOREDB_fra01/RESTOREDBDUP/ARCHIVELOG/2011_12_06/thread_3_seq_165457.3992.769173241


Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +RESTOREDB_fra01/RESTOREDBDUP/ARCHIVELOG/2011_12_06/thread_3_seq_165457.3992.769173241
File Name: +RESTOREDB_fra01/RESTOREDBDUP/ARCHIVELOG/2011_12_06/thread_1_seq_145279.1991.769173481


RMAN>


Uncatalog Backup

RMAN>CHANGE ARCHIVELOG ALL UNCATALOG;

RMAN>CHANGE BACKUP OF TABLESPACE DATA01 UNCATALOG;

To uncatalog a backup piece by name:
RMAN>CHANGE BACKUPPIECE '/LOC/PIECENAME' UNCATALOG;