Experience is what you get soon after you need it.

Rasul Allah (sal Allahu alaihi wa sallam) said: "Restore the trusts of those who trust you, and deal not falsely with him who deals falsely with you." [Abu Dawud, Tirmidhi]

Monday, December 6, 2010

Unregister database from Recovery Catalog

In Recovery Catalog Database:
If the target database has already been deleted, dropped, or decommissioned (i.e., it is no longer available), then in 9i/10g:
SQL> alter session set current_schema=RCAT10G_OTH;

Session altered.

SQL> execute dbms_rcvcat.unregisterdatabase(1,129982387);

PL/SQL procedure successfully completed.
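The two arguments to dbms_rcvcat.unregisterdatabase are the catalog DB_KEY and the target's DBID. If you don't have them handy, a lookup like the sketch below against the catalog owner's views should find them (the name 'DBNAME' is a placeholder):

```sql
-- Look up DB_KEY and DBID for the database to unregister.
-- RC_DATABASE is the recovery catalog view of registered databases.
SELECT db_key, dbid, name
FROM   rc_database
WHERE  name = 'DBNAME';   -- substitute your target db_name
```

Pass the returned DB_KEY as the first argument and DBID as the second.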

If the target database is available and up, then in 10g:

RMAN> connect target id/pwd@dbname
RMAN> connect catalog id/pwd@catalogdb

RMAN> UNREGISTER DATABASE;

Thursday, October 28, 2010

Archive log switches frequency

set linesize 1000
col MidN format 999
col 1AM format 999
col 2AM format 999
col 3AM format 999
col 4AM format 999
col 5AM format 999
col 6AM format 999
col 7AM format 999
col 8AM format 999
col 9AM format 999
col 10AM format 999
col 11AM format 999
col Noon format 999
col 1PM format 999
col 2PM format 999
col 3PM format 999
col 4PM format 999
col 5PM format 999
col 6PM format 999
col 7PM format 999
col 8PM format 999
col 9PM format 999
col 10PM format 999
col 11PM format 999
select to_char(first_time,'mm/dd/yy') logdate,
sum(decode(to_char(first_time,'hh24'),'00',1,0)) "MidN",
sum(decode(to_char(first_time,'hh24'),'01',1,0)) "1AM",
sum(decode(to_char(first_time,'hh24'),'02',1,0)) "2AM",
sum(decode(to_char(first_time,'hh24'),'03',1,0)) "3AM",
sum(decode(to_char(first_time,'hh24'),'04',1,0)) "4AM",
sum(decode(to_char(first_time,'hh24'),'05',1,0)) "5AM",
sum(decode(to_char(first_time,'hh24'),'06',1,0)) "6AM",
sum(decode(to_char(first_time,'hh24'),'07',1,0)) "7AM",
sum(decode(to_char(first_time,'hh24'),'08',1,0)) "8AM",
sum(decode(to_char(first_time,'hh24'),'09',1,0)) "9AM",
sum(decode(to_char(first_time,'hh24'),'10',1,0)) "10AM",
sum(decode(to_char(first_time,'hh24'),'11',1,0)) "11AM",
sum(decode(to_char(first_time,'hh24'),'12',1,0)) "Noon",
sum(decode(to_char(first_time,'hh24'),'13',1,0)) "1PM",
sum(decode(to_char(first_time,'hh24'),'14',1,0)) "2PM",
sum(decode(to_char(first_time,'hh24'),'15',1,0)) "3PM",
sum(decode(to_char(first_time,'hh24'),'16',1,0)) "4PM",
sum(decode(to_char(first_time,'hh24'),'17',1,0)) "5PM",
sum(decode(to_char(first_time,'hh24'),'18',1,0)) "6PM",
sum(decode(to_char(first_time,'hh24'),'19',1,0)) "7PM",
sum(decode(to_char(first_time,'hh24'),'20',1,0)) "8PM",
sum(decode(to_char(first_time,'hh24'),'21',1,0)) "9PM",
sum(decode(to_char(first_time,'hh24'),'22',1,0)) "10PM",
sum(decode(to_char(first_time,'hh24'),'23',1,0)) "11PM"
from v$log_history
group by to_char(first_time,'mm/dd/yy')
order by 1
/

Picked up the above from google:

Size of the redo log file:

SQL> SELECT distinct(to_char((bytes*0.000001),'9990.999')) size_mb
  2  FROM v$log;

SIZE_MB
---------
1073.742


Size and number of redo logs by day:

SELECT trunc(first_time) DAY,
count(*) NB_SWITCHS,
trunc(count(*)*log_size/1024/1024/1024) TOTAL_SIZE_GB,
to_char(count(*)/24,'9999.9') AVG_SWITCHS_PER_HOUR
FROM v$loghist,
(select avg(bytes) log_size from v$log)
GROUP BY trunc(first_time), log_size
ORDER BY 1 desc
/

Monday, October 25, 2010

ld: fatal: library -lclntsh: not found during 10.2.0.5 install

INFO: Linking /cti01/u0001/oracle/product/10.2.0/asm/precomp/lib/proc
INFO: ld: fatal: library -lclntsh: not found
INFO: ld: fatal: File processing errors. No output written to /cti01/u0001/oracle/product/10.2.0/asm/precomp/lib/proc
INFO: make: Fatal error: Command failed for target `/cti01/u0001/oracle/product/10.2.0/asm/precomp/lib/proc'


If you are installing 10.2.0.5 on 64-bit Solaris (SunOS) servers and hit one of the above relinking errors, use the workaround below, as provided by Oracle Support.

Note: this workaround applies to 10.2.0.5 only.

1. Backup the Oracle_Home (as directed in the Patch Set Notes)

Remove the 10.2.0.5 installation you just attempted, or start fresh by installing 10.2.0.1 first. Then:

2. Delete these files:
$ cd $ORACLE_HOME
$ rm lib/libncds10.a
$ rm network/install/ldflags.cds
$ rm lib/libndce10.a
$ rm network/install/ldflags.dce

3. Apply the 10.2.0.5 patchset

Friday, October 22, 2010

Read Alert log file in 11G

In 11g the alert log moved into the ADR: alongside the text alert log there is an XML version (log.xml), and to view its contents Oracle provides a utility called ADRCI. See below on how to use this utility:

HOST: /rman01/u0001/oracle/diag/rdbms/adm1p/ADM1P/alert
>adrci

ADRCI: Release 11.2.0.1.0 - Production on Fri Oct 22 09:35:01 2010

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

ADR base = "/rman01/u0001/oracle"
adrci>

adrci> help

HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL

There are other commands intended to be used directly by Oracle, type
"HELP EXTENDED" to see the list

adrci> show alert

ADR Home = /rman01/u0001/oracle/diag/rdbms/adm1p/ADM1P:
*************************************************************************
Output the results to file: /tmp/alert_7353_1_ADM1P_1.ado
"/tmp/alert_7353_1_ADM1P_1.ado" 2102 lines, 108992 characters
2010-08-25 16:59:50.107000 -04:00
Adjusting the default value of parameter parallel_max_servers

Tuesday, October 19, 2010

RMAN Delete old archivelogs

delete force noprompt archivelog all completed before 'sysdate-7';

If you want to delete archive logs older than a few hours, first verify the cutoff time, i.e., how many hours back you want to delete:

SQL> alter session set nls_date_format='MM/DD/YYYY HH24:MI:SS';

Session altered.

SQL> select sysdate-1/11 from dual;


SYSDATE-1/11
-------------------
09/22/2011 08:33:05

SQL> select sysdate from dual;

SYSDATE
-------------------
09/22/2011 10:44:15

SQL>

Since Oracle date arithmetic works in days, sysdate-1/11 is 24/11 hours (about 2 hours 11 minutes) ago, so here it will delete the files older than roughly 8:30 AM.
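Because date arithmetic is in days, fractions of a day express hours and minutes; a quick sketch of common cutoffs (the times shown depend on when you run it):

```sql
-- Day-fraction arithmetic: 1 = one day, so
--   sysdate - 1/24   = 1 hour ago
--   sysdate - 1/11   = 24/11 hours ~ 2 hours 11 minutes ago
--   sysdate - 1/1440 = 1 minute ago
SELECT to_char(sysdate,        'MM/DD/YYYY HH24:MI:SS') AS now,
       to_char(sysdate - 1/24, 'MM/DD/YYYY HH24:MI:SS') AS one_hour_ago,
       to_char(sysdate - 1/11, 'MM/DD/YYYY HH24:MI:SS') AS cutoff_2h11m
FROM   dual;
```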

RMAN> connect target /
RMAN> delete noprompt archivelog until time '(sysdate-1/11)' backed up 1 times to device type sbt_tape;

or

RMAN> delete force noprompt obsolete;
RMAN> delete force noprompt expired;
cheers.

Monday, October 11, 2010

lib/libclntsh.a(shr.o) could not be loaded

A user other than oracle was trying to access the Oracle libraries and got the errors below:

$exec(): 0509-036 Cannot load program FastReader because of the following errors:
0509-150 Dependent module $ORACLE_HOME/lib/libclntsh.a(shr.o) could not be loaded.
0509-022 Cannot load module $ORACLE_HOME/lib/libclntsh.a(shr.o).
0509-026 System error: The file access permissions do not allow the specified action.

Fix:
In 10g Oracle introduced stronger security on access to the Oracle binaries; to relax this, run the command below:

$ cd $ORACLE_HOME/install
$ ./changePerm.sh

-------------------------------------------------------------------------------
Disclaimer: The purpose of this script is to relax permissions on some of the
files in the database Oracle Home so that all clients can access them.
Please note that Oracle Corporation recommends using the most restrictive file
permissions as possible for your given implementation. Running this script
should be done only after considering all security ramifications.
-------------------------------------------------------------------------------

-n Do you wish to continue (y/n) [n]:
y
Spooling the error log /tmp/changePerm_err.log...
Finished running the script successfully


This will do the job... cheers

Monday, August 30, 2010

Recreate SPFILE accidentally dropped from ASM

Today we had a situation where a user accidentally dropped the SPFILE from an ASM diskgroup on RAC.

Since this is RAC, we had the pfile pointing to the ASM diskgroup, not to a physical file.

The method I used to restore the SPFILE:

RMAN> list backup of SPFILE;
List of Backup Pieces for backup set 11456205 Copy #1
BP Key Pc# Status Piece Name
------- --- ----------- ----------
11456219 1 AVAILABLE +DB_FRA_01/DBNAME/autobackup/2010_08_30/s_728389367.386.728389369

Backup Set Copy #2 of backup set 11456205
Device Type Elapsed Time Completion Time Compressed Tag
----------- ------------ ------------------- ---------- ---
SBT_TAPE 00:00:01 08/30/2010 11:47:18 NO TAG20100830T102247

List of Backup Pieces for backup set 11456205 Copy #2
BP Key Pc# Status Media Piece Name
------- --- ----------- ----------------------- ----------
11458334 1 AVAILABLE 202210 c-3704148265-20100830-02


The output above clearly shows we have a copy of the SPFILE on disk and on tape; I used the disk copy rather than the tape copy for the restore.

RMAN> restore spfile to '<location>' from '+DB_FRA_01/DBNAME/autobackup/2010_08_30/s_728389367.386.728389369';

Restore Complete

SQL> create pfile='/export/home/oracle/sshaik/sshaik1.ora' from spfile='/export/home/oracle/sshaik/sshaik.ora';

File created.



SQL> create spfile='+DB_DATA_01/DBNAME/spfileaadp1.ora' from pfile='/export/home/oracle/sshaik/sshaik1.ora';

File created.


Now bounce the instance to pick up the newly created SPFILE :-)
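On RAC each instance usually keeps a one-line init<SID>.ora in $ORACLE_HOME/dbs that just points at the shared SPFILE in ASM; a sketch using the diskgroup path from the example above (the instance/file name is a placeholder):

```sql
-- $ORACLE_HOME/dbs/initDBNAME1.ora (the entire file):
SPFILE='+DB_DATA_01/DBNAME/spfileaadp1.ora'
```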

Thursday, August 19, 2010

RAC Best Practices

For RAC newbies, and for anyone eager to know more about RAC and its best practices, follow the Metalink doc below.


RAC Assurance Support Team: RAC and Oracle Clusterware Starter Kit and Best Practices (Generic) [ID 810394.1]

Configuring Diagwait & Miss count in 10g RAC

To set the diagwait in RAC:
1) Shut down the RAC database if running:
srvctl stop database -d <db_name>
2) Shut down ASM if running:
srvctl stop asm -n <node_name>
Now log in as root.
3) Shut down CRS:
root@fwrkqrdb01cdp # /fwrk03/u0001/oracle/product/crs/bin/crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

root@# /fwrk03/u0001/oracle/product/crs/bin/crsctl get css diagwait
Configuration parameter diagwait is not defined.

root@fwrkqrdb01cdp # /fwrk03/u0001/oracle/product/crs/bin/crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

root@fwrkqrdb01cdp # /fwrk03/u0001/oracle/product/crs/bin/crsctl set css diagwait 13 -force
Configuration parameter diagwait is now set to 13.
root@fwrkqrdb01cdp # /fwrk03/u0001/oracle/product/crs/bin/crsctl get css diagwait
13
root@fwrkqrdb01cdp #

Refer to the below doc for the latest:
10g RAC: Steps To Increase CSS Misscount, Reboottime and Disktimeout [ID 284752.1]
set css misscount
Misscount: the maximum node-to-node network latency, in seconds, tolerated on the interconnect.

root@fwrkqrdb01cdp # /fwrk03/u0001/oracle/product/crs/bin/crsctl set css misscount 45
Configuration parameter misscount is now set to 45.

fwrkqrdb02cdp(CRS) /export/home/oracle
>crsctl get css misscount
45

Monday, July 26, 2010

Find Oracle Binaries PATH and Oracle bit version

If the output below shows 64-bit, the installed Oracle is 64-bit; a 32-bit install will show 32-bit instead.

Host>file $ORACLE_HOME/bin/oracl*
/adm02/u0001/oracle/product/10.1.4/apps_1/bin/oracle: ELF 64-bit MSB executable SPARCV9 Version 1, dynamically linked, not stripped


What if I don't know where Oracle is installed?
1)
>ps -aef| grep tns
This will show you the path from which the listener is running.
ex:
>ps -aef| grep tns
oracle 7932 1 0 Oct 30 ? 11795:52 /adm02/u0001/oracle/product/10.2.0/db_1/bin/tnslsnr adm1p -inherit

so my ORACLE_HOME is /adm02/u0001/oracle/product/10.2.0/db_1

2) SELECT substr(file_spec,1,instr(file_spec,'lib')-2) ORACLE_HOME FROM dba_libraries
WHERE library_name='DBMS_SUMADV_LIB';

ex:
SQL> SELECT substr(file_spec,1,instr(file_spec,'lib')-2) ORACLE_HOME FROM dba_libraries
  2  WHERE library_name='DBMS_SUMADV_LIB';
ORACLE_HOME
--------------------------------------------------------------------------------
/adm02/u0001/oracle/product/10.2.0/db_1

Wednesday, July 21, 2010

RMAN Full DISK BACKUP

labrman01(RCAT) /ora_backup/datafile
>rman

Recovery Manager: Release 10.2.0.4.0 - Production on Wed Jul 21 10:38:33 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN> connect target sys/syspwd@HYPP

connected to target database: HYPP (DBID=2563884143)

RMAN> connect catalog rcat10g/rcat10g@RCAT

connected to recovery catalog database

RMAN> run{
allocate channel ch01 type disk format '/ora_backup/backups/HYPP/HYPP_Full_%u_%s_%U';
backup database include current controlfile;
sql 'alter system archive log current';
backup archivelog all delete input;
}

released channel: ORA_DISK_1
allocated channel: ch01
channel ch01: sid=1079 devtype=DISK

Starting backup at 21-JUL-10
channel ch01: starting full datafile backupset
channel ch01: specifying datafile(s) in backupset
input datafile fno=00006 name=/adm02/u0001/oracle/datafile/HYPP/hypp_data_01.dbf
input datafile fno=00002 name=/adm02/u0001/oracle/datafile/HYPP/undo_01.dbf
input datafile fno=00008 name=/adm02/u0001/oracle/datafile/HYPP/hypp_bi_01.dbf
input datafile fno=00001 name=/adm02/u0001/oracle/datafile/HYPP/system01.dbf
input datafile fno=00003 name=/adm02/u0001/oracle/datafile/HYPP/sysaux01.dbf
input datafile fno=00009 name=/adm02/u0001/oracle/datafile/HYPP/hypp_apps_01.dbf
input datafile fno=00004 name=/adm02/u0001/oracle/datafile/HYPP/users01.dbf
input datafile fno=00005 name=/adm02/u0001/oracle/datafile/HYPP/tools_01.dbf
input datafile fno=00010 name=/adm02/u0001/oracle/datafile/HYPP/hyp_hal_data_01.dbf
input datafile fno=00007 name=/adm02/u0001/oracle/datafile/HYPP/hypp_idx_01.dbf
channel ch01: starting piece 1 at 21-JUL-10
channel ch01: finished piece 1 at 21-JUL-10
piece handle=/ora_backup/backups/HYPP/HYPP_Full_tnljb89m_6071_tnljb89m_1_1 tag=TAG20100721T104925 comment=NONE
channel ch01: backup set complete, elapsed time: 00:01:05
channel ch01: starting full datafile backupset
channel ch01: specifying datafile(s) in backupset
including current control file in backupset
channel ch01: starting piece 1 at 21-JUL-10
channel ch01: finished piece 1 at 21-JUL-10
piece handle=/ora_backup/backups/HYPP/HYPP_Full_toljb8bn_6072_toljb8bn_1_1 tag=TAG20100721T104925 comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:02
Finished backup at 21-JUL-10

Starting Control File Autobackup at 21-JUL-10
piece handle=/adm02/u0001/oracle/product/10.2.0/db_1/dbs/c-2563884143-20100721-01 comment=NONE
Finished Control File Autobackup at 21-JUL-10

sql statement: alter system archive log current

Starting backup at 21-JUL-10
current log archived
channel ch01: starting archive log backupset
channel ch01: specifying archive log(s) in backup set
input archive log thread=1 sequence=1 recid=1673 stamp=724874462
input archive log thread=1 sequence=2 recid=1674 stamp=724934781
input archive log thread=1 sequence=3 recid=1675 stamp=724934785
input archive log thread=1 sequence=4 recid=1676 stamp=724935048
input archive log thread=1 sequence=5 recid=1677 stamp=724935052
channel ch01: starting piece 1 at 21-JUL-10
channel ch01: finished piece 1 at 21-JUL-10
piece handle=/ora_backup/backups/HYPP/HYPP_Full_tqljb8cg_6074_tqljb8cg_1_1 tag=TAG20100721T105055 comment=NONE
channel ch01: backup set complete, elapsed time: 00:00:08
channel ch01: deleting archive log(s)
archive log filename=/adm02/u8001/HYPP/arch/arch_HYPQ_0001_0000000001_0724872404.arc recid=1673 stamp=724874462
archive log filename=/adm02/u8001/HYPP/arch/arch_HYPQ_0001_0000000002_0724872404.arc recid=1674 stamp=724934781
archive log filename=/adm02/u8001/HYPP/arch/arch_HYPQ_0001_0000000003_0724872404.arc recid=1675 stamp=724934785
archive log filename=/adm02/u8001/HYPP/arch/arch_HYPQ_0001_0000000004_0724872404.arc recid=1676 stamp=724935048
archive log filename=/adm02/u8001/HYPP/arch/arch_HYPQ_0001_0000000005_0724872404.arc recid=1677 stamp=724935052
Finished backup at 21-JUL-10

Starting Control File Autobackup at 21-JUL-10
piece handle=/adm02/u0001/oracle/product/10.2.0/db_1/dbs/c-2563884143-20100721-02 comment=NONE
Finished Control File Autobackup at 21-JUL-10
released channel: ch01

RMAN>

Create RMAN recovery catalog

labrman01(RCAT) /ora_backup/datafile
>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jul 21 10:31:55 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> create user RCAT10G identified by rcat10g default tablespace RCAT;

User created.


SQL> grant connect,resource,recovery_catalog_owner to rcat10g;

Grant succeeded.

SQL>exit

labrman01(RCAT) /ora_backup/datafile
>rman

Recovery Manager: Release 10.2.0.4.0 - Production on Wed Jul 21 10:36:20 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.


RMAN> connect catalog rcat10g/rcat10g@RCAT

connected to recovery catalog database

RMAN> create catalog

recovery catalog created

RMAN> exit


Recovery Manager complete.
labrman01(RCAT) /ora_backup/datafile
>rman

Recovery Manager: Release 10.2.0.4.0 - Production on Wed Jul 21 10:38:33 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

RMAN> connect target sys/syspwd@HYPP

connected to target database: HYPP (DBID=2563884143)

RMAN> connect catalog rcat10g/rcat10g@RCAT

connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>exit

Thursday, July 8, 2010

install 11g 11.2.0 client Ubuntu

Nothing is ever straightforward with Linux/Unix, so I will share my experience installing the 11g client on Ubuntu.

user1@user1-desktop:~$ uname -a
Linux user1-desktop 2.6.31-22-generic #60-Ubuntu SMP Thu May 27 00:22:23 UTC 2010 i686 GNU/Linux

downloaded 11gR2 client for Linux-x86
user1@user1-desktop:~/Downloads/linux1$ ls -lrt
drwxr-xr-x 6 user1 user1 4096 2009-08-18 15:51 client
-rw-r--r-- 1 user1 user1 642016988 2010-07-07 15:23 linux_11gR2_client.zip

Unzip it and go to the client dir.

Before you start the install, prep the PC with the necessary kernel parameters:
copy the contents below into /etc/sysctl.conf as root.

# vi /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000

Apply the kernel parameters now (or reboot):
# /sbin/sysctl -p

set security limits
# vi /etc/security/limits.conf
append the following values to the end of the file
* soft nproc 2047
* hard nproc 16384
* soft nofile 1024
* hard nofile 65536

create symbolic links
# ln -s /usr/bin/awk /bin/awk
# ln -s /usr/bin/rpm /bin/rpm
# ln -s /lib/libgcc_s.so.1 /lib/libgcc_s.so
# ln -s /usr/bin/basename /bin/basename

Now start runInstaller. I opted for the Administrator install. The install went on for a while, then boom: it started throwing linking errors.

If a dialog pops up indicating:

Error in invoking target install of makefile
/app/user1/product/11.2.0/client_1/rdbms/lib/ins_rdbms.mk

edit ~/app/user1/product/11.2.0/client_1/rdbms/lib/env_rdbms.mk and change:

LINK=$(PURECMDS) gcc $(LDFLAGS) $(COMPSOBJS)

to:

LINK=$(PURECMDS) gcc296 $(LDFLAGS) $(COMPSOBJS)

and change:

ORACLE_LINKER=$(ADE_DEL_FILE_CMD) $(PURECMDS) gcc $(OLAPPRELINKOPTS) $(LDFLAGS) $(COMPSOBJS)

to:

ORACLE_LINKER=$(ADE_DEL_FILE_CMD) $(PURECMDS) gcc296 $(OLAPPRELINKOPTS) $(LDFLAGS) $(COMPSOBJS)

Then click Retry, or just say Ignore.

The install went fine like a charm and asked me to run the root.sh scripts, which I did. I then added the environment variables like ORACLE_HOME to the .bashrc file and invoked sqlplus:

user1@user1-desktop:~$ sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jul 8 10:25:38 2010

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Enter user-name:


looked great...So good luck guys

Wednesday, July 7, 2010

RMAN DEBUG & TRACING

Turn debug on

$ rman target / catalog user/pwd@db debug trace rman.trc log rman.log
RMAN>


Additional Tracing at the channel level

RMAN> run {
2> allocate channel ch01 type sbt trace=5;
3> send 'NB_ORA_CLIENT=hypprdbv1-bn';
4> RESTORE SPFILE to pfile '/export/home/oracle/sshaik/initHYPP.ora' from autobackup maxdays 20;
5> }

allocated channel: ch01
channel ch01: sid=34 devtype=SBT_TAPE
channel ch01: Veritas NetBackup for Oracle - Release 6.5 (2007072323)

sent command to channel: ch01

Starting restore at 07-JUL-10

channel ch01: looking for autobackup on day: 20100707

Thursday, July 1, 2010

Known RMAN Performance Problems [ID 247611.1]

For anyone who wants to fix RMAN backup/recovery performance problems, I strongly recommend reading the Metalink note below. It is very useful and simple: all known issues documented in one place.


Known RMAN Performance Problems [ID 247611.1]

gather stats on fixed database objects

Latching on large fixed objects is expensive, so without proper statistics performance degradation is expected when sub-optimal plans occur (in some cases with instance-wide effects, such as effectively serializing access to the shared pool). This is more prevalent in very large or very busy systems, as the number of times the latch is requested, and/or the length of time the latch is held, increases with load and with the volume in the X$ tables being queried through the views.

Fixed object statistics will need to be gathered under representative load and you may experience performance degradation while the statistics are gathering.

There are some cases where having no statistics on a fixed object may produce the best plans, but in general, better plans are achieved by gathering statistics on these tables than by not gathering statistics.

There are some fixed tables that are simply very volatile by nature and it will be extremely hard to get accurate statistics on. In general, in the case of these volatile fixed tables, better plans are achieved by gathering statistics on these tables than by not gathering statistics.

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KGLDP';

OWNER TABLE_NAME LAST_ANAL
------------------------------ ------------------------------ ---------
SYS X$KGLDP

SQL> exec dbms_stats.gather_fixed_objects_stats();

PL/SQL procedure successfully completed.


SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name='X$KGLDP';

OWNER TABLE_NAME LAST_ANALYZED
------------------------------ ------------------------------ -------------------
SYS X$KGLDP 07/1/2010 11:09:54

Size of the local individual index partitions

If you know the index name, go to PART-2; if not, use PART-1 to get the index name from the table name.

PART-1
First find the index names for the given table and partition names:
select DISTINCT ''''||A.INDEX_NAME||''''||',' FROM dba_ind_partitions A,
DBA_PART_INDEXES B
WHERE A.INDEX_OWNER=B.OWNER
AND A.INDEX_NAME=B.INDEX_NAME
AND A.INDEX_OWNER='ISA'
AND B.TABLE_NAME IN ('ADJUSTMENT_REASON_CODE_PRE')
AND A.PARTITION_NAME IN ('P_CHARLOTTE');

PART-2:

select SEGMENT_NAME,PARTITION_NAME,sum(bytes)/1024/1024 size_in_mb
FROM DBA_SEGMENTS
WHERE owner='USER1'
AND SEGMENT_name in ('INDEX1')
AND SEGMENT_TYPE='INDEX PARTITION'
AND PARTITION_NAME IN
('P_CHARLOTTE')
GROUP BY SEGMENT_NAME,PARTITION_NAME
ORDER BY 1,2

Restore older version db on a newer version binaries

Oracle Says yes:

1) Can I restore or duplicate my previous version database using a later version of Oracle?

For example, is it possible to restore a 9i backup while using the 10g executables?

It is possible to use the 10.2 RMAN executable to restore a 9.2 database even if the restored datafiles will be stored in ASM. RMAN is configured so that a higher release is able to restore a lower release, but it is strongly suggested you use only the same version. In this scenario for a restore (not RMAN duplicate) you will be using the restored 9.2 database against 10.2 binaries. If you intend to run the restored 9.2 database against the 10.2 binary files you need to run the upgrade steps to make the restored 9.2 database a 10.2 database, but understand that the pre-upgrade checklist which you may have submitted against the source 9.2 database does NOT apply to this restored 9.2 database.

Once the restore / recovery is completed you will need to open the database with a combination of the resetlogs option and the upgrade option. For example:

SQL> alter database open resetlogs upgrade;
After the database is opened in this fashion run the upgrade steps for the higher version. If you have any problems with any part of this process please open a service request. If you are restoring using backups created by an earlier release, some time after you have upgraded the database, then media recovery will re-play the changes that were made during the upgrade, and you do not need to run the upgrade scripts again.

Wednesday, June 30, 2010

Restore database with SPFILE and Controlfile

When the SPFILE and controlfile backup locations are not available, or you are not sure of them, do the following to restore the controlfile and SPFILE.
But YOU MUST KNOW the DBID.

When no init.ora or SPFILE is available, use RMAN:
>export ORACLE_SID=ABC
>rman

Recovery Manager: Release 10.2.0.1.0 - Production on Wed Jun 30 14:16:48 2010

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target /

connected to target database (not started)

RMAN> startup nomount

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/UNIX/u0001/oracle/product/10.2.0.1/db_1/dbs/initABC.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area 159383552 bytes

Fixed Size 1977112 bytes
Variable Size 71308520 bytes
Database Buffers 79691776 bytes
Redo Buffers 6406144 bytes

now restoring the controlfile:

'set controlfile autobackup format' is only necessary if the controlfile autobackup is in a non-default location; set it accordingly.

RMAN> set dbid=123456789012;
run {
set controlfile autobackup format for device type disk to '%F';   # if on disk; change the device type to SBT if on tape
restore spfile from autobackup;
restore controlfile from autobackup;
startup force mount;   # 'startup force mount' is needed because the instance was originally started without an spfile
}

Once the DB is mounted, list the backup files and you are all set with the restore and recover commands:
RMAN> list backup summary;
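From here the usual finish is a full restore and recover; a minimal sketch, assuming the backups listed are reachable on the configured channels:

```sql
-- Sketch: full restore/recover once the controlfile is back in place.
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;   -- resetlogs because the controlfile was restored
```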

Friday, June 25, 2010

Trace a SID

SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id=>918,serial_num=>32481);
SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id =>918,serial_num=>32481, waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.session_trace_disable(session_id=>918,serial_num=>32481);
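To find the session_id and serial_num values to pass in (918/32481 above came from a live session), query v$session; the filter below is a hypothetical example:

```sql
-- Locate the SID and SERIAL# of the session you want to trace.
SELECT sid, serial#, username, program
FROM   v$session
WHERE  username = 'SIEBEL';   -- example filter; adjust as needed
```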

Increase INI_TRANS for heavily logging tables

If tables are transaction-intensive, for example used for logging, then increase INI_TRANS from the default to up to 4, or as you deem correct.

Verify INI_TRANS using the query below:
select table_name,ini_trans,freelists,freelist_groups from dba_tables
where owner='SIEBEL' AND TABLE_NAME IN ('S_DOCK_TXN_LOG');
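To actually raise INITRANS, an ALTER TABLE sketch (note: the new value applies only to newly formatted blocks; existing blocks keep their old setting until the segment is rebuilt, e.g. with ALTER TABLE ... MOVE followed by index rebuilds):

```sql
-- Raise INITRANS on the logging table (table name from the query above).
ALTER TABLE SIEBEL.S_DOCK_TXN_LOG INITRANS 4;
```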

Monitor Table Usage

I used the below query to get the total physical reads/writes by day:

select distinct
to_char(begin_interval_time,'mm/dd') c1,
sum(physical_reads_total) reads,
sum(physical_writes_total) writes
from
dba_hist_seg_stat s,
dba_hist_seg_stat_obj o
,dba_hist_snapshot sn
where
o.owner = 'SIEBEL'
and
s.obj# = o.obj#
and
sn.snap_id = s.snap_id
and
object_name = 'S_DOCK_TXN_LOG'
group by to_char(begin_interval_time,'mm/dd')
order by 1;

Wednesday, June 16, 2010

Change DB_NAME and DBID

>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 16 23:36:30 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup pfile='/evm05/u0001/oracle/product/10.2.0/db_1/dbs/initNEWDB.ora' mount;
ORACLE instance started.

Total System Global Area 2013265920 bytes
Fixed Size 2041720 bytes
Variable Size 922753160 bytes
Database Buffers 1073741824 bytes
Redo Buffers 14729216 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
HOSTNAME(NEWDB) /export/home/oracle
>nid target=sys/dba2hire@NEWDB dbname=NEWDB

DBNEWID: Release 10.2.0.4.0 - Production on Wed Jun 16 23:37:05 2010

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to database OLDDB (DBID=1847680250)

Connected to server version 10.2.0

Control Files in database:
+EVM03_REDO_01/NEWDB/controlfile/current.262.698751897
+EVM03_REDO_02/NEWDB/controlfile/current.257.698751897

Change database ID and database name OLDDB to NEWDB? (Y/[N]) => Y
Proceeding with operation
Changing database ID from 1847680250 to 2196540583
Changing database name from OLDDB to NEWDB
Control File +EVM03_REDO_01/NEWDB/controlfile/current.262.698751897 - modified
Control File +EVM03_REDO_02/NEWDB/controlfile/current.257.698751897 - modified
Datafile +EVM03_DATA_01/NEWDB/datafile/system.264.698752361 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/undo.310.698757225 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/sysaux.266.698752335 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.282.698754487 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.294.698755755 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/tools.290.698754583 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/users.314.698757431 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.292.698755751 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_idx.303.698756719 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.283.698754489 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.302.698756717 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.272.698751923 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.280.698754481 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/large_data.300.698756275 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/large_data.298.698755761 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/large_data.313.698757427 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/large_data.312.698757409 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.285.698754493 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.304.698756721 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.275.698753195 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.297.698755759 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.311.698757279 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_idx.270.698751929 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.276.698753197 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.305.698756723 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.271.698751927 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.293.698755753 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.307.698756725 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.287.698754495 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.274.698753193 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.268.698751931 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.299.698756141 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_idx.278.698753199 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.260.698753191 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.273.698751921 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.291.698755747 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/large_data.279.698753407 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_idx.306.698756723 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/undo.267.698752293 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.277.698753197 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.269.698751929 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.309.698757203 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_data.281.698754483 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.261.698753187 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/cbar_default_idx.286.698754495 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.262.698753183 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/mydvr_data.288.698754569 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.265.698751915 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.301.698756713 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/undo.296.698755757 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.284.698754491 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.308.698756725 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/large_data.289.698754577 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.263.698751919 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_index.295.698755757 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.317.719316783 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/datafile/evm_data.318.719316813 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/tempfile/temp.316.699704527 - dbid changed, wrote new name
Datafile +EVM03_DATA_01/NEWDB/tempfile/temp.315.699704525 - dbid changed, wrote new name
Control File +EVM03_REDO_01/NEWDB/controlfile/current.262.698751897 - dbid changed, wrote new name
Control File +EVM03_REDO_02/NEWDB/controlfile/current.257.698751897 - dbid changed, wrote new name
Instance shut down

Database name changed to NEWDB.
Modify parameter file and generate a new password file before restarting.
Database ID for database NEWDB changed to 2196540583.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database name and ID.
DBNEWID - Completed succesfully.

HOSTNAME(NEWDB) /export/home/oracle
>sqlplus / as sysdba

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 16 23:40:22 2010

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.

Connected to an idle instance.

SQL> startup pfile='/evm05/u0001/oracle/product/10.2.0/db_1/dbs/initNEWDB.ora' mount;
ORACLE instance started.

Total System Global Area 2013265920 bytes
Fixed Size 2041720 bytes
Variable Size 922753160 bytes
Database Buffers 1073741824 bytes
Redo Buffers 14729216 bytes
Database mounted.
SQL> show parameter db_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_name                              string      NEWDB
SQL> alter database open resetlogs;

Database altered.

SQL> create spfile from pfile;

File created.

SQL> shutdown immediate;

Monday, May 17, 2010

Remove ^M character from UNIX

To remove ^M characters in UNIX

To type a literal ^M at the shell, press Ctrl+V followed by Ctrl+M on your keyboard (no spaces in between).

Then do any one of the following:

dos2unix filename (not all UNIX flavors ship dos2unix by default)

or
sed 's/^M//g' ${INPUT_FILE} > tmp.txt
mv tmp.txt ${INPUT_FILE}

or
tr -d "\15" < ${INPUT_FILE} > tmp.txt;
mv tmp.txt ${INPUT_FILE}

or

perl -p -i -e 's/^M//g' ${INPUT_FILE}
or

multiple files in a directory

This will remove all the ^M from each file.
perl -p -i -e 's/^M//g' *
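If none of the above tools are at hand, the same cleanup can be done with a short Python script; this is a generic sketch equivalent to the tr/sed one-liners, not tied to any particular platform:

```python
# Strip carriage returns (the ^M character, \r) from files in place.
import sys

def strip_cr(text):
    # ^M is ASCII 13 (carriage return); remove every occurrence.
    return text.replace("\r", "")

if __name__ == "__main__":
    for name in sys.argv[1:]:
        # newline="" prevents Python from translating line endings,
        # so the \r characters stay visible to strip_cr.
        with open(name, "r", newline="") as f:
            data = f.read()
        with open(name, "w", newline="") as f:
            f.write(strip_cr(data))
```

Run it as `python strip_cr.py file1 file2 ...` to fix multiple files in one pass.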

Sunday, April 18, 2010

RMAN EXPIRED VS OBSOLETE

EXPIRED:
If you run CROSSCHECK, and if RMAN cannot locate the files, then it updates their
records in the RMAN repository to EXPIRED status. You can then use the DELETE
EXPIRED command to remove records of expired backups and copies from the RMAN
repository.

OBSOLETE:
backups that are no longer needed to satisfy specified recoverability requirements. You can delete files that are obsolete according to the configured default retention policy, or according to another retention policy that you specify.

Fast Recovery Area usage

SELECT * FROM V$RECOVERY_AREA_USAGE;
FILE_TYPE    PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
------------ ------------------ ------------------------- ---------------
CONTROLFILE                   0                         0               0
ONLINELOG                     2                         0              22
ARCHIVELOG                 4.05                      2.01              31
BACKUPPIECE                3.94                      3.86               8
IMAGECOPY                 15.64                     10.43              66
FLASHBACKLOG                .08                         0               1

Backup Size

Query the V$RMAN_BACKUP_JOB_DETAILS view for the size of the backups in an
RMAN session.
If you run BACKUP DATABASE, then
V$RMAN_BACKUP_JOB_DETAILS.OUTPUT_BYTES shows the total size of backup
sets written by the backup job for the database that you are backing up. To view
backup set sizes for all registered databases, query
RC_RMAN_BACKUP_JOB_DETAILS.
The following query shows backup set sizes ordered by session key, which is
the primary key for the RMAN session. The columns in_size and out_size
display the input and output sizes.
COL in_size FORMAT a10
COL out_size FORMAT a10
SELECT SESSION_KEY,
INPUT_TYPE,
COMPRESSION_RATIO,
INPUT_BYTES_DISPLAY in_size,
OUTPUT_BYTES_DISPLAY out_size
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;
The following sample output shows the sizes of the backups:
SESSION_KEY INPUT_TYPE    COMPRESSION_RATIO IN_SIZE    OUT_SIZE
----------- ------------- ----------------- ---------- ----------
         10 DATAFILE FULL                 1 602.50M    602.58M
         17 DB FULL                   1.137 634.80M    558.13M

Backup Job Speed

Query the V$RMAN_BACKUP_JOB_DETAILS view for the rate of backup jobs in an
RMAN session.

The following query shows the backup job speed ordered by session key, which is
the primary key for the RMAN session. The columns in_sec and out_sec
display the data input and output per second.
COL in_sec FORMAT a10
COL out_sec FORMAT a10
COL TIME_TAKEN_DISPLAY FORMAT a10
SELECT SESSION_KEY,
OPTIMIZED,
COMPRESSION_RATIO,
INPUT_BYTES_PER_SEC_DISPLAY in_sec,
OUTPUT_BYTES_PER_SEC_DISPLAY out_sec,
TIME_TAKEN_DISPLAY
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;
The following sample output shows the speed of the backup jobs:
SESSION_KEY OPT COMPRESSION_RATIO IN_SEC     OUT_SEC    TIME_TAKEN
----------- --- ----------------- ---------- ---------- ----------
          9 NO                  1 8.24M      8.24M      00:01:14
         16 NO             1.3273 6.77M      5.10M      00:01:45
        113 NO                  1 2.99M      2.99M      00:00:44

Backup Job History

To query details about past and current RMAN jobs:
1. Connect SQL*Plus to the database whose backup history you intend to query.
2. Query the V$RMAN_BACKUP_JOB_DETAILS view for information about the
backup type, status, and start and end time.
The following query shows the backup job history ordered by session key, which
is the primary key for the RMAN session:
COL STATUS FORMAT a9
COL hrs FORMAT 999.99
SELECT SESSION_KEY, INPUT_TYPE, STATUS,
TO_CHAR(START_TIME,'mm/dd/yy hh24:mi') start_time,
TO_CHAR(END_TIME,'mm/dd/yy hh24:mi') end_time,
ELAPSED_SECONDS/3600 hrs
FROM V$RMAN_BACKUP_JOB_DETAILS
ORDER BY SESSION_KEY;
The following sample output shows the backup job history:
SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME           HRS
----------- ------------- --------- -------------- -------------- -------
          9 DATAFILE FULL COMPLETED 04/18/07 18:14 04/18/07 18:15     .02
         16 DB FULL       COMPLETED 04/18/07 18:20 04/18/07 18:22     .03
        113 ARCHIVELOG    COMPLETED 04/23/07 16:04 04/23/07 16:05     .01

Thursday, April 15, 2010

RMAN-Block Change TRacking

Using Block Change Tracking to Improve Incremental Backup Performance

The block change tracking feature for incremental backups improves backup
performance by recording changed blocks for each datafile.
About Block Change Tracking
If block change tracking is enabled on a primary or standby database, then RMAN
uses a block change tracking file to identify changed blocks for incremental backups.
By reading this small bitmap file to determine which blocks changed, RMAN avoids
having to scan every block in the datafile that it is backing up.
Block change tracking is disabled by default. Nevertheless, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks are changed between backups. If your backup strategy involves incremental backups, then block change tracking is recommended. Block change
tracking in no way changes the commands used to perform incremental backups.
The change tracking file requires no maintenance after initial configuration.
You can only enable block change tracking at a physical standby database if a license
for the Oracle Active Data Guard option is enabled.
Space Management in the Block Change Tracking File The change tracking file maintains
bitmaps that mark changes in the datafiles between backups. The database performs a
bitmap switch before each backup. Oracle Database automatically manages space in
the change tracking file to retain block change data that covers the 8 most recent
backups. After the maximum of 8 bitmaps is reached, the oldest bitmap is overwritten
by the bitmap that tracks the current changes.
The first level 0 incremental backup scans the entire datafile. Subsequent incremental backups use the block change tracking file to scan only the blocks that have been marked as changed since the last backup. An incremental backup can be optimized only when it is based on a parent backup that was made after the start of the oldest bitmap in the block change tracking file.

Consider the 8-bitmap limit when developing your incremental backup strategy. For
example, if you make a level 0 database backup followed by 7 differential incremental
backups, then the block change tracking file now includes 8 bitmaps. If you then make
a cumulative level 1 incremental backup, then RMAN cannot optimize the backup
because the bitmap corresponding to the parent level 0 backup is overwritten with the
bitmap that tracks the current changes.
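The 8-bitmap rule behaves like a fixed-size ring buffer: once full, each new bitmap switch evicts the oldest bitmap. The following toy model (an illustration of the documented behavior, not RMAN internals) shows why the cumulative level 1 in the example above cannot be optimized:

```python
# Toy model of the 8-bitmap limit in the block change tracking file.
from collections import deque

class BitmapRing:
    def __init__(self, capacity=8):
        # deque with maxlen drops the oldest entry automatically,
        # mirroring the oldest bitmap being overwritten.
        self.bitmaps = deque(maxlen=capacity)

    def bitmap_switch(self, backup_id):
        # The database performs a bitmap switch before each backup.
        self.bitmaps.append(backup_id)

    def can_optimize(self, parent_backup_id):
        # An incremental can be optimized only if the bitmap covering
        # its parent backup is still retained.
        return parent_backup_id in self.bitmaps

ring = BitmapRing()
for b in range(1, 10):          # level 0 backup (1) plus 8 more backups
    ring.bitmap_switch(b)
print(ring.can_optimize(1))     # False: bitmap for backup 1 was overwritten
print(ring.can_optimize(5))     # True: still within the 8 retained bitmaps
```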
Location of the Block Change Tracking File One block change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in the destination specified by the DB_CREATE_FILE_DEST initialization
parameter. You can also place the change tracking file in any location you choose, by
specifying its name when enabling block change tracking. Oracle recommends against
using a raw device (that is, a disk without a file system) as a change tracking file.

RMAN does not support backup and recovery of the change tracking file. The
database resets the change tracking file when it determines that the change tracking
file is invalid. If you restore and recover the whole database or a subset, then the
database resets the block change tracking file and starts tracking changes again. After you make a level 0 incremental backup, the next incremental backup is able to use change tracking data.

Size of the Block Change Tracking File The size of the block change tracking file is
proportional to the size of the database and the number of enabled threads of redo.
The size of the block change tracking file can increase and decrease as the database
changes. The size is not related to the frequency of updates to the database.
Typically, the space required for block change tracking for a single instance is
approximately 1/30,000 the size of the data blocks to be tracked. For an Oracle RAC
environment, it is 1/30,000 of the size of the database, times the number of enabled
threads.

The following factors may cause the file to be larger than this estimate suggests:
To avoid the overhead of allocating space as your database grows, the block
change tracking file size starts at 10 MB. New space is allocated in 10 MB
increments. Thus, for any database up to approximately 300 GB, the file size is no
smaller than 10 MB, for up to approximately 600 GB the file size is no smaller than
20 MB, and so on.
For each datafile, a minimum of 320 KB of space is allocated in the block change
tracking file, regardless of the size of the datafile. Thus, if you have a large number of relatively small datafiles, the change tracking file is larger than for databases with a smaller number of larger datafiles containing the same data.
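The sizing rules above (roughly 1/30,000 of tracked data per redo thread, 10 MB allocation increments, 320 KB minimum per datafile) can be combined into a back-of-the-envelope estimator. This is only an approximation built from the documented rules, not Oracle's exact allocation algorithm:

```python
# Rough estimate of block change tracking file size, per the rules above.
MB = 1024 * 1024
KB = 1024

def bct_size_estimate(db_size_bytes, num_datafiles, redo_threads=1):
    # ~1/30000 of the tracked data, multiplied by enabled redo threads (RAC).
    base = db_size_bytes / 30000 * redo_threads
    # At least 320 KB per datafile, regardless of datafile size.
    per_file_min = num_datafiles * 320 * KB
    needed = max(base, per_file_min)
    # Space is allocated in 10 MB increments, starting at 10 MB.
    increments = max(1, -(-int(needed) // (10 * MB)))  # ceiling division
    return increments * 10 * MB

# A 250 GB single-instance database with 20 datafiles still fits
# in the initial 10 MB allocation.
print(bct_size_estimate(250 * 1024 * MB, num_datafiles=20) // MB, "MB")
```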
Enabling and Disabling Block Change Tracking

You can enable block change tracking when the database is either open or mounted.
This section assumes that you intend to create the block change tracking file as an
Oracle Managed File in the database area, which is where the database maintains
active database files such as datafiles, control files, and online redo log files.

To enable block change tracking:
1. Start SQL*Plus and connect to a target database with administrator privileges.
2. Make sure that the DB_CREATE_FILE_DEST initialization parameter is set.
SHOW PARAMETER DB_CREATE_FILE_DEST
If the parameter is not set, and if the database is open, then you can set the
parameter with the following form of the ALTER SYSTEM statement:
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/disk1/bct/' SCOPE=BOTH SID='*';
3. Enable block change tracking.
Execute the following ALTER DATABASE statement:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

You can also create the change tracking file in a location you choose yourself by
using the following form of SQL statement:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/mydir/rman_change_track.f' REUSE;

The REUSE option tells Oracle Database to overwrite any existing block change
tracking file with the specified name.
Disabling Block Change Tracking
This section assumes that the block change tracking feature is currently enabled. When you disable block change tracking, the database removes the block change tracking file from the operating system.

To disable block change tracking:
1. Start SQL*Plus and connect to a target database with administrator privileges.
2. Ensure that the target database is mounted or open.
3. Disable block change tracking.
Execute the following ALTER DATABASE statement:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
Checking Whether Change Tracking is Enabled
You can query the V$BLOCK_CHANGE_TRACKING view to determine whether change
tracking is enabled, and if it is, the filename of the block change tracking file.
To determine whether change tracking is enabled:

Enter the following query in SQL*Plus (sample output included):
COL STATUS FORMAT A8
COL FILENAME FORMAT A60
SELECT STATUS, FILENAME
FROM V$BLOCK_CHANGE_TRACKING;
STATUS FILENAME
-------- ------------------------------------------------------------
ENABLED /disk1/bct/RDBMS/changetracking/o1_mf_2f71np5j_.chg

Changing the Location of the Block Change Tracking File
To move the change tracking file, use the ALTER DATABASE RENAME FILE
statement. The database must be mounted. The statement updates the control file to
refer to the new location and preserves the contents of the change tracking file. If you cannot shut down the database, then you can disable and enable block change
tracking.
In this case, you lose the contents of the existing block change tracking file.
To change the location of the change tracking file:
1. Start SQL*Plus and connect to a target database.
2. If necessary, determine the current name of the change tracking file:
SQL> SELECT FILENAME FROM V$BLOCK_CHANGE_TRACKING;
3. If possible, shut down the database.
For example:

SQL> SHUTDOWN IMMEDIATE
If you shut down the database, then skip to the next step. If you choose not to shut
down the database, then execute the following SQL statements and skip all
remaining steps:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';
In this case you lose the contents of the block change tracking file. Until the next
time you complete a level 0 incremental backup, RMAN must scan the entire file.
4. Using host operating system commands, move the change tracking file to its new
location.
5. Mount the database and move the change tracking file to a location that has more
space. For example:
ALTER DATABASE RENAME FILE
'/disk1/bct/RDBMS/changetracking/o1_mf_2f71np5j_.chg' TO
'/disk2/bct/RDBMS/changetracking/o1_mf_2f71np5j_.chg';
This statement changes the location of the change tracking file while preserving its
contents.
6. Open the database:
SQL> ALTER DATABASE OPEN;

How frequent shall I backup

When deciding how often to take level 0 backups, a good rule of thumb is to take a
new level 0 backup whenever 20% or more of the data has changed. If the rate of
change to your database is predictable, then you can observe the size of your
incremental backups to determine when a new level 0 is appropriate. The following
SQL query determines the number of blocks written to an incremental level 1 backup
of each datafile with at least 20% of its blocks backed up:
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME,
BLOCKS, DATAFILE_BLOCKS
FROM V$BACKUP_DATAFILE
WHERE INCREMENTAL_LEVEL > 0
AND BLOCKS / DATAFILE_BLOCKS > .2
ORDER BY COMPLETION_TIME;

Find who did it using Logminer

Step 1

ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI:SS';

EXECUTE SYS.DBMS_LOGMNR.START_LOGMNR(STARTTIME => '04/13/2010 10:25:00', -
ENDTIME => '04/13/2010 10:45:00', -
OPTIONS => SYS.DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
SYS.DBMS_LOGMNR.CONTINUOUS_MINE + -
SYS.DBMS_LOGMNR.NO_SQL_DELIMITER);


Step 2

SQL> SET LINESIZE 132
SQL> SET PAGES 1000
SQL> COL OPERATION FORMAT A10
SQL> COL OBJ_NAME FORMAT A40
SQL> COL SQL_REDO FORMAT A60 WRAP
SQL> COL USERNAME FORMAT A15
SQL> COL SESSION_INFO FORMAT A40 WRAP
SQL>
SQL> SELECT OPERATION, SUBSTR(SEG_OWNER || '.' || SEG_NAME, 1, 40) OBJ_NAME, TIMESTAMP, USERNAME, SESSION#, SERIAL#, SQL_REDO, SESSION_INFO
2 FROM V$LOGMNR_CONTENTS
3 WHERE SEG_OWNER = 'ISA' AND SEG_NAME = 'I2_DISPO_ACTIVITY'
4 AND (OPERATION = 'DDL');

OPERATION OBJ_NAME TIMESTAMP USERNAME SESSION# SERIAL#
---------- ---------------------------------------- ------------------- --------------- ---------- ----------
SQL_REDO SESSION_INFO
------------------------------------------------------------ ----------------------------------------
DDL ISA.I2_DISPO_ACTIVITY 04/13/2010 10:43:47 UNKNOWN 0 0
DROP TABLE EMP


Step 3

EXECUTE DBMS_LOGMNR.END_LOGMNR;


To get the info like username and session info you should enable supplemental logging as below.

Verify supplemental logging is turned on
SQL>SELECT NAME,SUPPLEMENTAL_LOG_DATA_MIN,SUPPLEMENTAL_LOG_DATA_PK,SUPPLEMENTAL_LOG_DATA_UI,SUPPLEMENTAL_LOG_DATA_FK,SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;

NAME      SUPPLEME SUP SUP SUP SUP
--------- -------- --- --- --- ---
CM3P      NO       NO  NO  NO  NO

Next, you must enable supplemental logging. Supplemental logging places additional column data into the redo log file whenever an UPDATE operation is performed. At the least, minimal database-level supplemental logging must be enabled for any Change Data Capture source database:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

more at :

http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/logminer.htm

Friday, April 9, 2010

Full Backup Vs Incremental

The only difference between a level 0 incremental backup and a full backup is that a
full backup is never included in an incremental strategy. Thus, an incremental level 0 is
a full backup that happens to be the parent of incremental backups whose level is
greater than 0.
A level 1 incremental backup can be either of the following types:
■ A differential incremental backup, which backs up all blocks changed after the
most recent incremental backup at level 1 or 0
■ A cumulative incremental backup, which backs up all blocks changed after the
most recent incremental backup at level 0
Incremental backups are differential by default.

Note: Cumulative backups are preferable to differential backups
when recovery time is more important than disk space, because
fewer incremental backups need to be applied during recovery.
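The trade-off in the note above can be sketched with a toy model: after a level 0 on day 0 and one level 1 backup per day, a restore under a differential strategy must apply every level 1 since the level 0, while a cumulative strategy needs only the latest one. This is illustrative only; RMAN's actual restore logic also considers archived redo logs:

```python
# Toy model: which level 1 backups must be applied to recover on a given day?
# Level 0 taken on day 0, one level 1 backup on each following day.

def backups_to_apply(recovery_day, strategy):
    if strategy == "differential":
        # Each differential covers only one day, so every one taken since
        # the level 0 must be applied in order.
        return list(range(1, recovery_day + 1))
    if strategy == "cumulative":
        # Each cumulative covers everything since the level 0, so only the
        # most recent one is needed (larger backup, faster recovery).
        return [recovery_day] if recovery_day >= 1 else []
    raise ValueError("strategy must be 'differential' or 'cumulative'")

print(backups_to_apply(6, "differential"))  # [1, 2, 3, 4, 5, 6]
print(backups_to_apply(6, "cumulative"))    # [6]
```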

RMAN Backup multiplexing

Multiplexed Backup Sets
When creating backup sets, RMAN can simultaneously read multiple files from disk
and then write their blocks into the same backup set. For example, RMAN can read
from two datafiles simultaneously, and then combine the blocks from these datafiles
into a single backup piece. The combination of blocks from multiple files is called
backup multiplexing. Image copies, by contrast, are never multiplexed.

RMAN multiplexing is determined by several factors. For example, the FILESPERSET
parameter of the BACKUP command determines how many datafiles to put in each
backup set. The MAXOPENFILES parameter of ALLOCATE CHANNEL or CONFIGURE
CHANNEL defines how many datafiles RMAN can read from simultaneously. The basic
multiplexing algorithm is as follows:
■ Number of files in each backup set
This number is the minimum of the FILESPERSET setting and the number of files
read by each channel. The FILESPERSET default is 64.
■ The level of multiplexing
This is the number of input files simultaneously read and then written into the
same backup piece. The level of multiplexing is the minimum of MAXOPENFILES
and the number of files in each backup set. The MAXOPENFILES default is 8
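The two minimums in the algorithm above can be written out directly; this is a sketch of the documented rules, not RMAN's internal implementation:

```python
# Basic RMAN multiplexing arithmetic, per the rules above.

def files_per_backup_set(filesperset, files_read_by_channel):
    # Number of files in each backup set: the minimum of the FILESPERSET
    # setting (default 64) and the number of files read by each channel.
    return min(filesperset, files_read_by_channel)

def multiplexing_level(maxopenfiles, files_in_set):
    # Level of multiplexing: the minimum of MAXOPENFILES (default 8)
    # and the number of files in each backup set.
    return min(maxopenfiles, files_in_set)

# One channel backing up 12 datafiles with default settings:
n = files_per_backup_set(64, 12)    # 12 files in the backup set
print(multiplexing_level(8, n))     # 8 files read simultaneously
```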

RMAN backup compression

Block Compression for Backup Sets
RMAN uses two types of block compression when creating backup sets:
1. Unused Block Compression (Supports disk backup and OSB tape backup)
2. Null Block Compression (Supports all backups)

Unused Block Compression
During unused block compression, RMAN does not check each block. Instead,
RMAN reads the bitmaps that indicate what blocks are currently allocated and then
only reads the blocks that are currently allocated.
Unused block compression is turned on automatically when all of the following five
conditions are true:
1. The COMPATIBLE initialization parameter is set to 10.2 or higher.
2. There are currently no guaranteed restore points defined for the database.
3. The datafile is locally managed.
4. The datafile is being backed up to a backup set as part of a full backup or a level 0 incremental backup.
5. The backup set is created on disk or Oracle Secure Backup is the media manager.

Null Block Compression
During null block compression, RMAN checks every block to see if it has ever
contained data. Blocks that have never contained data are not backed up. Blocks that
have contained data, either currently or at some point in time in the past, are backed up.

Binary Compression for Backup Sets
RMAN supports binary compression of backup sets. Backup compression is only
enabled when you specify AS COMPRESSED BACKUPSET in the BACKUP command, or
configure it once with: CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO
COMPRESSED BACKUPSET.
You have two binary compression options:

You can use the BASIC compression algorithm, which does not require the
Advanced Compression Option. This setting offers a compression ratio
comparable to MEDIUM, at the expense of additional CPU consumption.

If you have licensed the Advanced Compression Option, you can instead select
one of the LOW, MEDIUM, or HIGH compression levels to trade compression
ratio against CPU usage.

current sid and session info

select * from v$session where sid=(select distinct sid from v$mystat);

Thursday, April 8, 2010

Exclude tablespaces in Backup

For example, you can exclude tablespaces DATA_01 and INDEX_01 from whole
database backups as follows:
CONFIGURE EXCLUDE FOR TABLESPACE DATA_01;
CONFIGURE EXCLUDE FOR TABLESPACE INDEX_01;


If you run the following command, then RMAN backs up all tablespaces in the
database except DATA_01 and INDEX_01:
BACKUP DATABASE;

11g Restore points

Normal vs Guaranteed restore points:

Guaranteed Restore Points
Like a normal restore point, a guaranteed restore point serves as an alias for an SCN
in recovery operations. A principal difference is that guaranteed restore points never
age out of the control file and must be explicitly dropped. In general, you can use a
guaranteed restore point as an alias for an SCN with any command that works with a
normal restore point.

If the recovery area has enough disk space to store the needed logs, then you can use a
guaranteed restore point to rewind a whole database to a known good state days or
weeks ago. As with Flashback Database, even the effects of NOLOGGING operations
like direct load inserts can be reversed with guaranteed restore points.

Limitations that apply to Flashback Database also apply to
guaranteed restore points


To create a restore point:
1. Connect SQL*Plus to a target database.
2. Ensure that the database is open or mounted. If the database is mounted, then it
must have been shut down cleanly (unless it is a physical standby database).
3. Run the CREATE RESTORE POINT statement.
The following example shows how to create a normal restore point in SQL*Plus:
SQL> CREATE RESTORE POINT before_upgrade;
This example shows how to create a guaranteed restore point:
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

FLASH BACK DATABASE-CONS

Limitations of Flashback Database Because Flashback Database works by undoing
changes to the datafiles that exist at the moment that you run the command, it has the
following limitations:
■ Flashback Database can only undo changes to a datafile made by an Oracle
database. It cannot be used to repair media failures, or to recover from accidental
deletion of datafiles.
■ You cannot use Flashback Database to undo a shrink datafile operation. However,
you can take the shrink file offline, flashback the rest of the database, and then
later restore and recover the shrink datafile
■ You cannot use Flashback Database alone to get back a dropped datafile. If you
flashback a database to a time when a dropped datafile existed in the database,
Oracle will only add the datafile entry in the controlfile. You can recover the
dropped datafile by further restoring and recovering the datafile.
■ If the database control file is restored from backup or re-created, all accumulated
flashback log information is discarded. You cannot use FLASHBACK DATABASE
to return to a point in time before the restore or re-creation of a control file
■ When using Flashback Database with a target time at which a NOLOGGING
operation was in progress, block corruption is likely in the database objects and
datafiles affected by the NOLOGGING operation. For example, if you perform a
direct-path INSERT operation in NOLOGGING mode, and that operation runs from
9:00 to 9:15 on April 3, 2005, and you later need to use Flashback Database to
return to the target time 09:07 on that date, the objects and datafiles updated by
the direct-path INSERT may be left with block corruption after the Flashback
Database operation completes.
If possible, avoid using Flashback Database with a target time or SCN that coincides
with a NOLOGGING operation. Also, perform a full or incremental backup of the
affected datafiles immediately after any NOLOGGING operation to ensure
recoverability to points in time after the operation. If you expect to use Flashback
Database to return to a point in time during an operation such as a direct-path
INSERT, consider performing the operation in LOGGING mode.


If "flashback buf free by RVWR" is the top wait event, then you know
that Oracle Database cannot write flashback logs very quickly. In that case, you may want to tune the file system and storage used by the fast recovery area.


The V$FLASHBACK_DATABASE_STAT view shows the bytes of flashback data logged
by the database. Each row in the view shows the statistics accumulated (typically over
the course of an hour). The FLASHBACK_DATA and REDO_DATA columns describe
bytes of flashback data and redo data written respectively during the time interval,
while the DB_DATA column describes bytes of data blocks read and written. The
columns FLASHBACK_DATA and REDO_DATA correspond to sequential writes, while
DB_DATA corresponds to random reads and writes.
Because of the difference between sequential I/O and random I/O, a better indication
of I/O overhead is the number of I/O operations issued for flashback logs

V$SYSSTAT Statistics
Column Name                Column Meaning
physical write I/O request The number of write operations issued for writing data blocks
physical read I/O request  The number of read operations issued for reading data blocks
redo writes                The number of write operations issued for writing to the redo log
flashback log writes       The number of write operations issued for writing to flashback logs

DELETE INPUT VS DELETE ALL INPUT

If you specify DELETE INPUT (without ALL), then RMAN deletes only the copy that it backs up.

If you specify ALL, then RMAN deletes all copies of the specified logs that it finds in the V$ARCHIVED_LOG view.

Ex:
The BACKUP ... DELETE INPUT command can delete archived redo logs, datafile copies, and backup sets after backing them up. This functionality is especially useful when backing up archived logs on disk to tape. RMAN backs up one and only one copy of each log sequence number, and then deletes the file that it backs up. For example, assume that you issue:

BACKUP ARCHIVELOG ALL DELETE INPUT;

In this command, RMAN backs up one copy of each log for each available sequence number, and then deletes only the file that it actually backs up.

If you specify the DELETE ALL INPUT option, then RMAN deletes whichever files match the criteria that you specify, even if there are several files of the same log sequence number. For example, assume that you archive to 3 different directories. Then, you issue this command:

BACKUP ARCHIVELOG ALL FROM SEQUENCE 1200 DELETE ALL INPUT;

In this case, RMAN backs up only one copy of each log sequence between 1200 and the most recent sequence, but deletes all logs with these sequence numbers contained in the three archive destinations.

The archived log failover feature means that RMAN searches every enabled archiving destination for good copies of a log sequence number. For example, assume that /log1 and /log2 are the only enabled archiving destinations, and that they contain the same sequence number. You run this command:

BACKUP ARCHIVELOG FROM SEQUENCE 123 DELETE ALL INPUT;

RMAN can start reading from any enabled archiving directory. For example, assume RMAN starts in directory /log1 and finds log_123.f there. Then, if RMAN discovers that log_124.f is corrupt, it searches in /log2 for a good copy of this log. Because DELETE ALL INPUT is specified, RMAN deletes all copies of logs on disk of sequence 123 and higher.

Wednesday, April 7, 2010

how many channels should we allocate?

Determining Channel Parallelism to Match Hardware Devices

RMAN can perform the I/O required for many commands in parallel, to make optimal use of your hardware resources. To perform I/O in parallel, however, the I/O must be associated with a single RMAN command, not a series of commands. For example, if backing up three datafiles, issue the command

BACKUP DATAFILE 5,6,7;

rather than issuing the commands

BACKUP DATAFILE 5;
BACKUP DATAFILE 6;
BACKUP DATAFILE 7;

When all three datafiles are backed up in one command, RMAN recognizes the opportunity for parallelism and can use multiple channels to do the I/O in parallel. When three separate commands are used, RMAN can only perform the backups one at a time, regardless of available channels and I/O devices.

The number of channels available (whether allocated in a RUN block or configured in advance) for use with a device at the moment that you run a command determines whether RMAN will read from or write to that device in parallel while carrying out the command. Failing to allocate the right number of channels adversely affects RMAN performance during I/O operations.

As a rule, the number of channels used in carrying out an individual RMAN command should match the number of physical devices accessed in carrying out that command. If manually allocating channels for a command, allocate one for each device; if configuring automatic channels, configure the PARALLELISM setting appropriately.

When backing up to tape, you should allocate one channel for each tape drive. When backing up to disk, allocate one channel for each physical disk, unless you can optimize the backup for your disk topography by using multiple disk channels. Each manually allocated channel uses a separate connection to the target or auxiliary database.

The three single-file BACKUP commands shown above create three backups sequentially: because each command backs up only one file, only one channel is active at any one time.

Tuesday, April 6, 2010

How much FAST RECOVERY AREA?

Use the following formula to estimate the disk quota, where n is the interval in days
between incremental updates and y is the delay in days in applying the foreign archived
redo logs on a logical standby database:

Disk Quota =
Size of a copy of database +
Size of an incremental backup +
Size of (n+1) days of archived redo logs +
Size of (y+1) days of foreign archived redo logs (for logical standby) +
Size of control file +
Size of an online redo log member * number of log groups +
Size of flashback logs (based on DB_FLASHBACK_RETENTION_TARGET value)
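As a quick sanity check, the formula can be evaluated with ordinary shell arithmetic. Every size below is a hypothetical placeholder, not a measurement from any real database; substitute figures from your own environment:

```shell
# Estimate the fast recovery area disk quota. All sizes in GB; all values
# are made-up examples for illustration only.
db_copy=500        # size of a copy of the database
incr_backup=50     # size of an incremental backup
daily_redo=20      # archived redo generated per day
n=1                # days between incremental updates
foreign_redo=0     # foreign archived redo per day (logical standby only)
y=0                # delay in days applying foreign archived redo
controlfile=1      # size of the control file
redo_member=1      # size of one online redo log member
log_groups=3       # number of log groups
flashback=30       # ESTIMATED_FLASHBACK_SIZE from V$FLASHBACK_DATABASE_LOG

quota=$(( db_copy + incr_backup + (n+1)*daily_redo + (y+1)*foreign_redo + controlfile + redo_member*log_groups + flashback ))
echo "Disk quota: ${quota} GB"
```

With these sample numbers the estimate comes to 624 GB; size DB_RECOVERY_FILE_DEST_SIZE at or above the result.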


To determine the optimum size for the fast recovery area:

1. If you plan to use flashback logging or guaranteed restore points, query
V$ARCHIVED_LOG to determine how much redo the database generates in the
time to which you intend to set DB_FLASHBACK_RETENTION_TARGET.

2. If flashback logging is enabled, run the database under a normal workload
for the time period specified by DB_FLASHBACK_RETENTION_TARGET, so that
the database can generate a representative sample of flashback logs.

3. Query the V$FLASHBACK_DATABASE_LOG view as follows:

SELECT ESTIMATED_FLASHBACK_SIZE
FROM V$FLASHBACK_DATABASE_LOG;

The result is an estimate of the disk space needed to meet the current flashback
retention target, based on the database workload since Flashback Database was
enabled.


Oracle recommends that DB_RECOVERY_FILE_DEST be set to a different value from
DB_CREATE_FILE_DEST or any of the DB_CREATE_ONLINE_LOG_DEST_n
initialization parameters.

Friday, April 2, 2010

VALIDATE RMAN BACKUP

To validate all database files and archived redo log files for physical and logical corruption:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;

You can also use the VALIDATE command to check individual data blocks:

RMAN> VALIDATE DATAFILE 4 BLOCK 10 TO 13;

You can also validate backup sets:

RMAN>VALIDATE BACKUPSET 3;

RMAN CATALOG BACKUPS

1)Catalog an archive log

RMAN> CATALOG ARCHIVELOG '/LOC/arc01.arc', '/LOC/arc02.arc';

2)Catalog a datafile copy

RMAN>CATALOG DATAFILECOPY '/LOC/DATAFILE01.dbf' LEVEL 0;

3)Catalog multiple copies in a directory

RMAN>CATALOG START WITH '/LOC/BACKUPDIR';

4)Catalog files in the flash recovery area:

RMAN>CATALOG RECOVERY AREA ;

5)Catalog backup pieces:
RMAN> CATALOG BACKUPPIECE '/LOC/PIECENAME';

Examples:
RMAN> CATALOG START WITH '+disk';           # catalog all files from an ASM disk group
RMAN> CATALOG START WITH '/fs1/datafiles/'; # catalog all files in a directory


RMAN> catalog start with '+RESTOREDB_FRA01/RESTOREDBDUP/ARCHIVELOG/2011_12_06';

starting full resync of recovery catalog
full resync complete
searching for all files that match the pattern +RESTOREDB_FRA01/RESTOREDBDUP/ARCHIVELOG/2011_12_06

List of Files Unknown to the Database
=====================================
File Name: +RESTOREDB_fra01/RESTOREDBDUP/ARCHIVELOG/2011_12_06/thread_3_seq_165457.3992.769173241


Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +RESTOREDB_fra01/RESTOREDBDUP/ARCHIVELOG/2011_12_06/thread_3_seq_165457.3992.769173241
File Name: +RESTOREDB_fra01/RESTOREDBDUP/ARCHIVELOG/2011_12_06/thread_1_seq_145279.1991.769173481


RMAN>


Uncatalog Backup

RMAN>CHANGE ARCHIVELOG ALL UNCATALOG;

RMAN>CHANGE BACKUP OF TABLESPACE DATA01 UNCATALOG;

To uncatalog a backup piece by name:
RMAN> CHANGE BACKUPPIECE '/LOC/PIECENAME' UNCATALOG;

Wednesday, March 31, 2010

Change listener log

UNIX> cd /u0001/oracle/product/10.2.0/network/log
UNIX> lsnrctl set log_status off
UNIX> mv listener.log listener.old
UNIX> lsnrctl set log_status on

Tuesday, March 30, 2010

Restore Archivelog with Particular sequence

RMAN> run {
2> allocate channel dsk1 type disk;
3> restore archivelog logseq 11645;
4> release channel dsk1;
5> }

What if it is RAC, and you need a particular log sequence from a particular thread?

RMAN> run {
2> allocate channel ch01 type disk format '/ora_backup/u0006/FIPRD3/archivelog/';
3> restore archivelog logseq 24208 thread 3;
4> }

using target database control file instead of recovery catalog
allocated channel: ch01
channel ch01: sid=639 instance=FIPRD3 devtype=DISK

Starting restore at 19-APR-11

channel ch01: starting archive log restore to default destination
channel ch01: restoring archive log
archive log thread=3 sequence=24208
channel ch01: reading from backup piece /ora_backup/u0006/FIPRD3/archivelog/ARC_FIPRD3_20110417_115101_s298206_p1_6uma0fad
channel ch01: restored backup piece 1
piece handle=/ora_backup/u0006/FIPRD3/archivelog/ARC_FIPRD3_20110417_115101_s298206_p1_6uma0fad tag=TAG20110417T115252
channel ch01: restore complete, elapsed time: 00:00:05
Finished restore at 19-APR-11
released channel: ch01



Restoring to an archivelog destination other than the default, again for a particular thread and log sequence:

RMAN> run{
2> set archivelog destination to '/ora_backup/u0001/cloning/';
3> restore archivelog sequence 6232 thread 3;
4> }

executing command: SET ARCHIVELOG DESTINATION
using target database control file instead of recovery catalog

Starting restore at 26-JUN-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=2610 instance=ICMPRD3 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=948 instance=ICMPRD3 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=2365 instance=ICMPRD3 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=3315 instance=ICMPRD3 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=1188 instance=ICMPRD3 device type=DISK

channel ORA_DISK_1: starting archived log restore to user-specified destination
archived log destination=/ora_backup/u0001/cloning/
channel ORA_DISK_1: restoring archived log
archived log thread=3 sequence=6232
channel ORA_DISK_1: reading from backup piece /ora_backup/u0004/ICMPRD3/archivelog/ARC_ICMPRD3_20110624_130406_s7190_p1_0mmfmh90
channel ORA_DISK_1: piece handle=/ora_backup/u0004/ICMPRD3/archivelog/ARC_ICMPRD3_20110624_130406_s7190_p1_0mmfmh90 tag=TAG20110624T130521
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 26-JUN-11

Monday, March 29, 2010

Avoid RMAN-08137

I used the script below to back up and then delete archive logs older than about 30 minutes, so that the most recent archivelogs remain available to the standby and the following warning is avoided:

RMAN-08137: WARNING: archive log not deleted as it is still needed


RUN {
ALLOCATE CHANNEL ch01 TYPE DISK FORMAT '/u0001/${ORACLE_SID}/u01/%d_${BACKUP_TYPE}_%M_%D_%Y_%s_%p_%t';

BACKUP
TAG 'arch_${TODAY}_${TIME}'
FILESPERSET ${ARCHIVE_PERSET}
ARCHIVELOG ALL;
DELETE FORCE NOPROMPT ARCHIVELOG UNTIL TIME '(sysdate - 1/48)' BACKED UP 1 TIMES TO DEVICE TYPE DISK; # 1/48 of a day = 30 minutes
RELEASE CHANNEL ch01;

}
EOF

(The ${...} variables are expanded by the calling shell script, which feeds this block to RMAN as a here-document terminated by EOF.)
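UNTIL TIME takes a date expression, so a retention window given in minutes has to be written as a fraction of a day (30 minutes = 30/1440 = 1/48 of a day). A small sketch of the conversion, assuming only a POSIX shell and awk:

```shell
# Convert a retention window in minutes into the SYSDATE day-fraction form
# used in DELETE ... ARCHIVELOG UNTIL TIME clauses.
minutes=30
expr_text=$(awk -v m="$minutes" 'BEGIN { printf "sysdate - 1/%d", (24*60)/m }')
echo "$expr_text"
```

For 30 minutes this prints `sysdate - 1/48`.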

Wednesday, March 10, 2010

RMAN LIST

These variables determine the format used for the time
parameters in RMAN commands such as RESTORE, RECOVER, and REPORT
export NLS_LANG=american
export NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'

RMAN> LIST BACKUP;
RMAN> LIST BACKUP OF DATABASE;
RMAN> LIST BACKUP OF DATABASE BY BACKUP;
RMAN> LIST BACKUP RECOVERABLE;
RMAN> LIST BACKUP SUMMARY;
RMAN> LIST INCARNATION;
RMAN> LIST BACKUP BY FILE;
RMAN> LIST COPY OF DATABASE ARCHIVELOG ALL;
RMAN> LIST COPY OF DATAFILE 1, 2, 3;
RMAN> LIST BACKUP OF DATAFILE 11 SUMMARY;
RMAN> LIST BACKUP OF ARCHIVELOG FROM SEQUENCE 1437;
RMAN> LIST CONTROLFILECOPY "/tmp/cntrlfile.copy";
RMAN> LIST BACKUPSET OF DATAFILE 1;
RMAN> LIST backup tag '04_12_09_F';
RMAN> LIST BACKUPPIECE ' ';
RMAN> LIST BACKUP OF ARCHIVELOG ;
RMAN> list backup of database completed between "to_date('06/17/2010','mm/dd/yyyy')" and "to_date('06/19/2010','mm/dd/yyyy')";
RMAN> LIST backup of database completed after "to_date('06/20/2010','mm/dd/yyyy')";


Direct the RMAN output to a LOG:

The following example appends the output from an RMAN session to a text file at
/tmp/msglog.log
% rman TARGET / LOG /tmp/msglog.log APPEND

UNIX $> rman target / | tee rman_output.log

Thursday, March 4, 2010

Copy ASM files to File system

Use these together with the method below when cloning:
SQL>ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
SQL>create pfile from spfile;


Method-1:
SQL>ALTER DATABASE MOUNT;
RMAN> CONNECT TARGET /
RMAN> backup as copy tablespace system format '/unix02/u0001/datafilesbackup/%U';
RMAN> backup as copy tablespace sysaux format '/unix02/u0001/datafilesbackup/%U';
RMAN> EXIT;

Tuesday, March 2, 2010

DRM - Dynamic Resource Mastering

When using Real Application Clusters (RAC), each instance has its own SGA and buffer cache. RAC coordinates changes to these blocks across instances to maximize performance and to ensure data integrity. Each copy of a buffer, also called a cache resource, has a master, which is one of the nodes of the cluster.

In database releases before 10g (10.1.0.2), once a cache resource was mastered on an instance, re-mastering (a change of master) occurred only during a reconfiguration, which happens automatically during normal operations such as instance startup or shutdown, and during abnormal events such as node eviction by the Cluster Manager. So if Node B is the master of a cache resource, that resource remains mastered on Node B until a reconfiguration.

10g introduced resource remastering via DRM. With DRM, a resource can be re-mastered on another node, say from Node B to Node A, if the cache resource is found to be accessed more frequently from Node A. A reconfiguration is no longer the only trigger for re-mastering.

In 10gR1, DRM is driven by file affinity; in 10gR2, it is based on objects.

Associated bug with DRM on 10g:
The instance terminates with the following error in the alert log:

ORA-00481: LMON process terminated with error
The instance may also fail during DRM with LMON reporting "kjfcdrmrfg: SYNC TIMEOUT (522650, 521749, 900), step 34",
and the alert log showing:
Fri Feb XX 06:02:22 2010
Errors in file /bdump/sid1_lmon_1085658.trc:
ORA-00481: LMON process terminated with error

AWR reports show:

Event        Waits  %Time-outs  Total Wait Time (s)  Avg wait (ms)  Waits/txn
gc remaster    471       97.88                  898           1908       0.22


Fix:

There is currently no patch for bug 6960699 on AIX.
The workaround is to disable DRM by setting the following on all instances:
_gc_affinity_time=0
_gc_undo_affinity=FALSE

Find OSPID for SID

select vs.sid, vs.serial#, vs.username, vs.inst_id, vp.spid
from gv$session vs, gv$process vp
where vs.paddr = vp.addr
and vs.inst_id = vp.inst_id
and vs.sid = 1;

To view NIC info

How to find which NICs are available to you:

Solaris & AIX:
ifconfig -a
Linux (RHEL), run as root:
ifconfig -a

Oracle listener established connections

unix $> netstat -an | grep ESTABLISHED | wc -l

gives the count of established TCP connections on the host, which includes the sessions that arrived through the listener
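Note that this pipeline counts every ESTABLISHED TCP connection on the host, not only those that came in through the listener. To count listener connections alone, filter on the listener port as well. A sketch with hypothetical BSD-style netstat lines (1521 is the assumed listener port; on Linux the local address is printed as 10.0.0.5:1521 instead, so adjust the pattern):

```shell
# Count only the ESTABLISHED connections on the listener port (assumed 1521).
# 'sample' stands in for real `netstat -an` output piped into the filter.
sample='tcp4  0 0 10.0.0.5.1521  10.0.0.9.54321  ESTABLISHED
tcp4  0 0 10.0.0.5.22    10.0.0.9.54000  ESTABLISHED
tcp4  0 0 10.0.0.5.1521  10.0.0.8.41234  ESTABLISHED'

count=$(printf '%s\n' "$sample" | grep ESTABLISHED | grep -c '\.1521 ')
echo "$count"
```

Here two of the three established connections came through port 1521.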