Experience: is what you get soon after you need it.

Rasul Allah (sal Allahu alaihi wa sallam) said: "Restore the trusts of those who trust you, and deal not falsely with him who deals falsely with you." [Abu Dawud, Tirmidhi]

Thursday, December 20, 2012

11gR2 - Oracle Index usage monitoring


To start monitoring the index usage:

ALTER INDEX <index_name> MONITORING USAGE;


To stop:

ALTER INDEX <index_name> NOMONITORING USAGE;


To check the status, if you can log in as the index owner, use:

select * from v$object_usage;
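Putting the pieces together, a quick sketch (the index and schema names here are hypothetical):

```sql
-- Hypothetical example: track whether the index CN.LINES_N1 is used
ALTER INDEX cn.lines_n1 MONITORING USAGE;

-- ... run the representative workload ...

-- USED = 'YES' once any execution has touched the index
SELECT index_name, table_name, monitoring, used,
       start_monitoring, end_monitoring
  FROM v$object_usage
 WHERE index_name = 'LINES_N1';

-- Switch monitoring off again
ALTER INDEX cn.lines_n1 NOMONITORING USAGE;
```

Note that v$object_usage only shows rows for indexes owned by the logged-in user, which is why the sys.object_usage query below is needed for other owners.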

If viewing the stats as another user or a superuser, use the query below:

select d.username, io.name INDEX_NAME, t.name TABLE_NAME,
       decode(bitand(i.flags, 65536), 0, 'NO', 'YES') MONITORING,
       decode(bitand(ou.flags, 1), 0, 'NO', 'YES') INDEX_USED,
       ou.start_monitoring,
       ou.end_monitoring
from sys.obj$ io, sys.obj$ t, sys.ind$ i, sys.object_usage ou,
     dba_users d
where io.owner# = d.user_id
  AND d.username = 'CN'
  and i.obj# = ou.obj#
  and io.obj# = ou.obj#
  and t.obj# = i.bo#;
 

Tuesday, December 18, 2012

Change Concurrent manager for a Program

How to change a concurrent manager for a program.

Ex:-
1) Exclude the program from the Standard manager:
SYSADMIN --> Concurrent --> Manager --> Define


 

--> Specialization rules:




Exclude the Program:



Now go to the custom manager and include the program.
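To double-check the rules from the database side, a sketch against the standard EBS FND tables (the exact column usage here is my assumption; validate on your instance):

```sql
-- Sketch: list include/exclude program rules per concurrent manager
SELECT q.concurrent_queue_name   manager,
       p.concurrent_program_name program,
       DECODE(c.include_flag, 'I', 'Included', 'E', 'Excluded') rule
  FROM fnd_concurrent_queues        q,
       fnd_concurrent_queue_content c,
       fnd_concurrent_programs      p
 WHERE c.concurrent_queue_id = q.concurrent_queue_id
   AND c.type_code           = 'P'   -- 'P' = program specialization rule
   AND c.type_id             = p.concurrent_program_id;
```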

Monday, December 17, 2012

Find Exadata IO saved by smart scan and offload to cells

Find the amount of I/O saved by smart scan vs. total I/O, or
Find the amount of I/O transported from the cells to the DB, or
Find the amount of I/O saved by storage index, or
Find the amount of I/O saved by predicate offload

Image version: 11.2.3.1.0.120304


Verify which functions and operators qualify for smart scan by querying V$SQLFN_METADATA.
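For example, the offloadable functions can be listed directly:

```sql
-- Functions/operators the storage cells can evaluate (OFFLOADABLE = 'YES')
SELECT func_id, name
  FROM v$sqlfn_metadata
 WHERE offloadable = 'YES'
 ORDER BY name;
```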



Example-1:

select st.inst_id, st.sid, st.value/1024/1024 io_in_mb, sn.name stat_name
from gv$sesstat st, gv$statname sn
where st.statistic# = sn.statistic#
  and st.inst_id = sn.inst_id
  and st.sid = 676
  and (sn.name like 'cell physical%' or sn.name like 'cell io%' or sn.name like 'physical%total bytes');


INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    676    0    physical read total bytes
3    676    0    physical write total bytes
3    676    0    cell physical IO interconnect bytes
3    676    0    cell physical IO bytes saved during optimized file creation
3    676    0    cell physical IO bytes saved during optimized RMAN file restore
3    676    0    cell physical IO bytes eligible for predicate offload
3    676    0    cell physical IO bytes saved by storage index
3    676    0    cell physical IO bytes sent directly to DB node to balance CPU
3    676    0    cell physical IO interconnect bytes returned by smart scan



SQL> select count(*) from cn.LINES_API_ALL where processed_period_id< 2011001;

  COUNT(*)
----------
   4300815
Elapsed: 00:00:44.54

Execution Plan
----------------------------------------------------------
Plan hash value: 616062978


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("PROCESSED_PERIOD_ID"<2011001)
       filter("PROCESSED_PERIOD_ID"<2011001)




INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    676    6873.57    physical read total bytes
3    676    0    physical write total bytes
3    676    6873.57    cell physical IO interconnect bytes
3    676    0    cell physical IO bytes saved during optimized file creation
3    676    0    cell physical IO bytes saved during optimized RMAN file restore
3    676    0    cell physical IO bytes eligible for predicate offload
3    676    0    cell physical IO bytes saved by storage index
3    676    0    cell physical IO bytes sent directly to DB node to balance CPU
3    676    0    cell physical IO interconnect bytes returned by smart scan

Now force it to use direct path reads by using a parallel hint.

SQL>  set timing on echo on linesize 1000 pages 300
SQL> set autot trace exp stat
SQL>  select /*+ PARALLEL(T,64) */count(*) from cn.LINES_API_ALL T where processed_period_id< 2011001;

Elapsed: 00:00:01.53

Execution Plan
----------------------------------------------------------
Plan hash value: 859133999



Predicate Information (identified by operation id):
---------------------------------------------------

   6 - storage("PROCESSED_PERIOD_ID"<2011001)
       filter("PROCESSED_PERIOD_ID"<2011001)

Statistics
----------------------------------------------------------
        192  recursive calls
       9920  db block gets
   37668038  consistent gets
   37347799  physical reads
          0  redo size
        529  bytes sent via SQL*Net to client
        520  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed


INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    676    291779.69    physical read total bytes
3    676    0    physical write total bytes
3    676    92.42    cell physical IO interconnect bytes
3    676    0    cell physical IO bytes saved during optimized file creation
3    676    0    cell physical IO bytes saved during optimized RMAN file restore
3    676    291779.68    cell physical IO bytes eligible for predicate offload
3    676    279727.9    cell physical IO bytes saved by storage index
3    676    0    cell physical IO bytes sent directly to DB node to balance CPU
3    676    92.41    cell physical IO interconnect bytes returned by smart scan


From the above it is clear that smart scan offloaded the I/O to the cells and transported only
the necessary I/O to the DB server, i.e. around 92 MB, and the overall elapsed time dropped to approximately 2 seconds from 44 seconds.
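The same saving can be expressed as a single ratio. A sketch at the instance level, using the same statistic names shown above (session-level figures come from v$sesstat/gv$sesstat instead):

```sql
-- % of offload-eligible bytes that never crossed the interconnect
SELECT ROUND(100 * (1 - ret.value / NULLIF(elig.value, 0)), 2) pct_io_saved
  FROM v$sysstat elig, v$sysstat ret
 WHERE elig.name = 'cell physical IO bytes eligible for predicate offload'
   AND ret.name  = 'cell physical IO interconnect bytes returned by smart scan';
```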

Example-2

CREATE TABLE AS SELECT also forces direct path reads; let's check an example:

SQL> drop table sshaik_api_all;

Table dropped.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush shared_pool;

System altered.




SQL> alter session set cell_offload_processing=false;

Session altered.

Elapsed: 00:00:00.00
SQL> create table sshaik_api_all as select * from cn.LINES_API_ALL where processed_period_id<=2011002;

Table created.

Elapsed: 00:05:27.32

No smart scan or cell offloading, and it took around 5 min 27 sec.

INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    112    298494.52    physical read total bytes
3    112    6697.1    physical write total bytes
3    112    311888.73    cell physical IO interconnect bytes
3    112    0    cell physical IO bytes saved during optimized file creation
3    112    0    cell physical IO bytes saved during optimized RMAN file restore
3    112    0    cell physical IO bytes eligible for predicate offload
3    112    0    cell physical IO bytes saved by storage index
3    112    0    cell physical IO bytes sent directly to DB node to balance CPU
3    112    0    cell physical IO interconnect bytes returned by smart scan




SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.20
SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.07
SQL> drop table sshaik_api_all;

Table dropped.

Elapsed: 00:00:00.81
SQL> purge dba_recyclebin;

DBA Recyclebin purged.



Elapsed: 00:00:00.00
SQL> alter session set cell_offload_processing=true;

Session altered.

Elapsed: 00:00:00.00
SQL> show parameter cell_offload_processing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cell_offload_processing              boolean     TRUE


SQL> create table sshaik_api_all as select * from cn.LINES_API_ALL where processed_period_id<=2011002;

Table created.



Elapsed: 00:01:21.11

With smart scan and cell offloading, it took around 1 min 21 sec.

Plan hash value: 3947898682



Predicate Information (identified by operation id):
---------------------------------------------------

   2 - storage("PROCESSED_PERIOD_ID"<=2011002)
       filter("PROCESSED_PERIOD_ID"<=2011002)



INST_ID    SID    IO_IN_MB    STAT_NAME
=======================================================
3    112    298615.73    physical read total bytes
3    112    20694.93    physical write total bytes
3    112    19844.59    cell physical IO interconnect bytes
3    112    13966    cell physical IO bytes saved during optimized file creation
3    112    0    cell physical IO bytes saved during optimized RMAN file restore
3    112    312442.61    cell physical IO bytes eligible for predicate offload
3    112    94806.78    cell physical IO bytes saved by storage index
3    112    0    cell physical IO bytes sent directly to DB node to balance CPU
3    112    6219.6    cell physical IO interconnect bytes returned by smart scan


SQL> select count(*) from sshaik_api_all;

  COUNT(*)
----------
  11968704

Monday, December 10, 2012

Oracle: Move a datafile into ASM


How to move a datafile that was created on the OS to ASM.

Issue:
Datafile created on the OS instead of in ASM

Given:
RAC 3 nodes
tablespace has multiple datafiles 

 Check the status of the file:
select df.file#, to_char(df.creation_time,'mm-dd-yyyy hh24:mi:ss') created, df.name, ts.name ts_name, df.status
from v$datafile df, v$tablespace ts
where df.ts# = ts.ts# and df.file# = 127;

FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    dbhome_1/dbs/DBNAME_DATA_01    APPS_TS_TX_IDX    ONLINE


RMAN> connect target /

connected to target database: DBNAME (DBID=2919937482)

RMAN> copy datafile 127 to '+DBNAME_DATA_01';

Starting backup at 10-DEC-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=854 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_2

channel ORA_DISK_1: starting datafile copy
input datafile file number=00127 name=/icm01/u0001/app/oracle/product/11.2.0/dbhome_1/dbs/DBNAME_DATA_01
output file name=+DBNAME_DATA_01/DBNAME/datafile/apps_ts_tx_idx.480.801661731 tag=TAG20121210T114844 RECID=54 STAMP=801661781
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:56
Finished backup at 10-DEC-12

Starting Control File and SPFILE Autobackup at 10-DEC-12
piece handle=+DBNAME_FRA_01/DBNAME/autobackup/2012_12_10/s_801661785.5682.801661787 comment=NONE
Finished Control File and SPFILE Autobackup at 10-DEC-12

RMAN> switch datafile 127 to copy;

datafile 127 switched to datafile copy "+DBNAME_DATA_01/DBNAME/datafile/apps_ts_tx_idx.480.801661731"

RMAN> exit


Recovery Manager complete.


SQL*Plus: Release 11.2.0.1.0 Production on Mon Dec 10 11:50:32 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    +apps_ts_tx_idx.480.801661731   APPS_TS_TX_IDX    RECOVER


>rman

Recovery Manager: Release 11.2.0.1.0 - Production on Mon Dec 10 11:51:22 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target /

connected to target database: DBNAME (DBID=2919937482)

RMAN> recover datafile 127;

Starting recover at 10-DEC-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=3679 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=3955 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=4524 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=4799 instance=DBNAME1 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=5082 instance=DBNAME1 device type=DISK

starting media recovery
media recovery complete, elapsed time: 00:00:07

Finished recover at 10-DEC-12

RMAN> exit


Recovery Manager complete.


SQL*Plus: Release 11.2.0.1.0 Production on Mon Dec 10 11:52:06 2012

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options


FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    +apps_ts_tx_idx.480.801661731   APPS_TS_TX_IDX    OFFLINE


SQL> alter database datafile 127 online;

Database altered.

SQL>

FILE#    CREATION_TIME        NAME                TS_NAME        STATUS
127    12-10-2012 05:06:27    +apps_ts_tx_idx.480.801661731   APPS_TS_TX_IDX    ONLINE


DBNAMEb01cdp(DBNAME1)  /icm01/u0001/app/oracle/product/11.2.0/dbhome_1/dbs
>rm DBNAME_DATA_01
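The whole move boils down to a short sequence; a sketch using this example's file number and diskgroup (depending on the state of the file you may need to take it offline before the switch):

```sql
-- Steps 1-3 run inside RMAN (connected as target /):
-- 1) RMAN> copy datafile 127 to '+DBNAME_DATA_01';
-- 2) RMAN> switch datafile 127 to copy;
-- 3) RMAN> recover datafile 127;
-- 4) From SQL*Plus, bring the file online and confirm the new ASM path:
ALTER DATABASE DATAFILE 127 ONLINE;
SELECT file#, name, status FROM v$datafile WHERE file# = 127;
-- 5) Only then remove the old file from the OS filesystem.
```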

Saturday, November 24, 2012

ORA-01502: index or partition of such index is in unusable state

Exploring the ORA-01502 error and why we usually get this error message. I am not explaining why or how the index status changed to UNUSABLE (mostly due to a table move or an ALTER INDEX xxxx UNUSABLE).

You will get

ORA-01502: index  or partition of such index is in unusable state

If you have the parameter skip_unusable_indexes = FALSE, then it makes sense that Oracle reports this error during DML activity.

If you don't care about the unusable indexes during DML, and you accept that the optimizer will choose different (more expensive) execution plans during SELECT, then you can set skip_unusable_indexes = TRUE at the instance level or at the session level and move on.

But what if you get this error even when you have the parameter skip_unusable_indexes set to TRUE at the instance level?

i.e
SQL> show parameter skip

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
skip_unusable_indexes                boolean     TRUE

SKIP_UNUSABLE_INDEXES enables or disables the use and reporting of tables with unusable indexes or index partitions. If a SQL statement uses a hint that forces the usage of an unusable index, then this hint takes precedence over initialization parameter settings, including SKIP_UNUSABLE_INDEXES. If the optimizer chooses an unusable index, then an ORA-01502 error will result. (See Oracle Database Administrator's Guide for more information about using hints.)

Values:   true

    Disables error reporting of indexes and index partitions marked UNUSABLE. This setting allows all operations (inserts, deletes, updates, and selects) on tables with unusable indexes or index partitions.




and you still get the error during DML activity on this table. Why?

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);
insert into TEST_TABLE (salesrep_dim_pk) values (55555)
*
ERROR at line 1:
ORA-01502: index 'TEST_INDEX_UNIQUE' or partition of such index is in unusable state

Example:
SQL> create table TEST_TABLE as select * from  cn.cn_d_salesreps;

Table created.

SQL> CREATE UNIQUE INDEX TEST_INDEX_UNIQUE ON TEST_TABLE (SALESREP_DIM_PK);

Index created.

SQL> commit;

Commit complete.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608

SQL> select * from TEST_TABLE where salesrep_dim_pk =95056;

Execution Plan
----------------------------------------------------------
Plan hash value: 684699485

-------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                   |     1 |  1470 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST_TABLE        |     1 |  1470 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | TEST_INDEX_UNIQUE |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("SALESREP_DIM_PK"=95056)

Above explain plan shows the optimizer is using the index.

Now insert some data into the table... some DML...

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);

1 row created.

SQL> delete from TEST_TABLE where salesrep_dim_pk=55555;

1 row deleted.

SQL> commit;

Commit complete.


****** Now mark the index UNUSABLE *****
SQL> alter index TEST_INDEX_UNIQUE unusable;

Index altered.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608
***** This is bad: even though the index is unusable, the segments were not dropped for the index *****
***** The default behaviour is that Oracle drops the segments for an unusable index *****



SQL> select * from TEST_TABLE where salesrep_dim_pk =95056;

Execution Plan
----------------------------------------------------------
Plan hash value: 74755328

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |   151 |   216K|    10   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS STORAGE FULL| TEST_TABLE |   151 |   216K|    10   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("SALESREP_DIM_PK"=95056)
       filter("SALESREP_DIM_PK"=95056)

****As expected the access path changed from INDEX to FULL TABLE SCAN as the index is unusable******

***** try doing some DML ********

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);
insert into TEST_TABLE (salesrep_dim_pk) values (55555)
*
ERROR at line 1:
ORA-01502: index 'TEST_INDEX_UNIQUE' or partition of such index is in unusable state

*** Since the index is a UNIQUE index, Oracle will not allow us to do any DML activity on the underlying table
on which the index is unusable, and will not drop the index segments either ***

  Note:
    If an index is used to enforce a UNIQUE constraint on a table, then allowing insert and update operations on the table might violate the constraint. Therefore, this setting does not disable error reporting for unusable indexes that are unique.



SQL> alter index TEST_INDEX_UNIQUE rebuild;

Index altered.

SQL> select sum(bytes) from dba_segments where segment_name='TEST_INDEX_UNIQUE';

SUM(BYTES)
----------
    196608

SQL> insert into TEST_TABLE (salesrep_dim_pk) values (55555);

1 row created.

SQL> commit;

Commit complete.


Whereas an unusable non-unique index doesn't throw this error.

Create table TEST_TABLE as select * from my_table;

****Created non-unique index here ********
create index TEXT_IDX on TEST_TABLE(customer_num,customer_name);

commit;

SQL> select customer_num from TEST_TABLE where customer_num='8.1502701000322E15';

Execution Plan
----------------------------------------------------------
Plan hash value: 3224766131

-----------------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |    12 |    96 |     4   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| TEXT_IDX |    12 |    96 |     4   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("CUSTOMER_NUM"=8150270100032200)

SQL> select wo_num from TEST_TABLE where  wo_num='1.00037856314112E15';

Execution Plan
----------------------------------------------------------
Plan hash value: 3979868219

----------------------------------------------------------------------------------------
| Id  | Operation                 | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |            |     7 |    49 |   317K  (2)| 00:16:09 |
|*  1 |  TABLE ACCESS STORAGE FULL| TEST_TABLE |     7 |    49 |   317K  (2)| 00:16:09 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - storage("WO_NUM"=1000378563141120)
       filter("WO_NUM"=1000378563141120)

SQL> ALTER INDEX TEXT_IDX unusable;

Index altered.



SQL> select index_name,status from dba_indexes where index_name='TEXT_IDX';

INDEX_NAME                     STATUS
------------------------------ --------
TEXT_IDX                       UNUSABLE


SQL> select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='TEXT_IDX';

SUM(BYTES)/1024/1024/1024
-------------------------



SQL> exec dbms_stats.gather_table_stats('MYSCHEMA','TEST_TABLE',estimate_percent=>dbms_stats.auto_sample_size,method_opt=>'FOR ALL COLUMNS SIZE 1',cascade=>TRUE);

PL/SQL procedure successfully completed.


SQL> update TEST_TABLE set customer_num='1234567890' where customer_num='8.1502701000322E15';

6 rows updated.

SQL> alter index TExt_idx rebuild;


Index altered.

SQL> commit;

Commit complete.

SQL> select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='TEXT_IDX';

SUM(BYTES)/1024/1024/1024
-------------------------
               2.00976563


Friday, October 12, 2012

EXADATA X2-2: Creating a DBFS filesystem/share on EXADATA X2-2

 Environment:
EXADATA X2-2
OS: Linux
Image version: 11.2.3.2.0.120713
DB - 11.2.0.3

For Linux database servers, there are several steps to perform as root. Solaris database servers do not require this step and can skip it. First, add the oracle user to the fuse group on Linux. Run these commands as the root user.

(root)# dcli -g ~/dbs_group -l root usermod -a -G fuse oracle

Create the /etc/fuse.conf file with the user_allow_other option. Ensure proper privileges are applied to this file.

(root)# dcli -g ~/dbs_group -l root "echo user_allow_other > /etc/fuse.conf"
(root)# dcli -g ~/dbs_group -l root chmod 644 /etc/fuse.conf


For all database servers, create an empty directory that will be used as the mount point for the DBFS filesystem.

(root)# dcli -g ~/dbs_group -l root mkdir /dbfs_direct


[root@exadb01 onecommand]# pwd
/opt/oracle.SupportTools/onecommand

[root@exadb01 onecommand]# dcli -g dbs_group -l root mkdir /dbfs_share
[root@exadb01 onecommand]# dcli -g dbs_group -l root chown oracle:dba /dbfs_share
[root@exadb01 onecommand]#


exadb01.(MYDB1)  /home/oracle
>dba

SQL*Plus: Release 11.2.0.3.0 Production on Fri Oct 12 23:41:06 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create bigfile tablespace dbfsts datafile '+DBFS_DG' size 32g autoextend on next 8g maxsize 300g NOLOGGING
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

Tablespace created.


SQL>
SQL>  create user dbfs_user identified by default tablespace dbfsts quota unlimited on dbfsts;
SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
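A quick sanity check of the repository user created above (names from this post):

```sql
-- Confirm the role and tablespace setup for the DBFS repository owner
SELECT granted_role FROM dba_role_privs WHERE grantee = 'DBFS_USER';
SELECT default_tablespace FROM dba_users WHERE username = 'DBFS_USER';
SELECT bigfile FROM dba_tablespaces WHERE tablespace_name = 'DBFSTS';
```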


SQL> conn dbfs_user/
Connected.
SQL> start dbfs_create_filesystem dbfsts FS1
No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS1', tbl_name =>
'T_FS1', tbl_tbs => 'dbfsts', lob_tbs => 'dbfsts', do_partition => false,
partition_key => 1, do_compress => false, compression => '', do_dedup => false,
do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS1', provider_name =>
'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_FS1', store_mount=>'FS1');
end;
--------
CHMOD STORE:
declare m integer; begin m := dbms_fuse.fs_chmod('/FS1', 16895); end;
No errors.


copy the script:

-rw-r--r-- 1 oracle   oinstall 11592 Oct 13 00:09 mount-dbfs.sh

[root@exadb01 tmp]# dos2unix mount-dbfs.sh mount-dbfs.sh
dos2unix: converting file mount-dbfs.sh to UNIX format ...
dos2unix: converting file mount-dbfs.sh to UNIX format ...
[root@exadb01 tmp]#


Edit the variable settings at the top of the script for your environment. Edit or confirm the settings for the following variables in the script. Comments in the script will help you to confirm the values for these variables.

    DBNAME
    MOUNT_POINT
    DBFS_USER
    ORACLE_HOME (should be the RDBMS ORACLE_HOME directory)
    LOGGER_FACILITY (used by syslog to log the messages/output from this script)
    MOUNT_OPTIONS
    DBFS_PASSWD (used only if WALLET=false)
    DBFS_PWDFILE_BASE (used only if WALLET=false)
    WALLET (must be true or false)
    TNS_ADMIN (used only if WALLET=true)
    DBFS_LOCAL_TNSALIAS

After editing, copy the script (rename it if desired or needed) to the proper directory (GI_HOME/crs/script) on database nodes and set proper permissions on it, as the root user:


[root@exadb01 onecommand]# dcli -g dbs_group -l root -d /u01/app/11.2.0.3/grid/crs/script -f /tmp/mount-dbfs.sh
[root@exadb01 onecommand]# dcli -g dbs_group -l root chown oracle:dba /u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
[root@exadb01 onecommand]#  dcli -g dbs_group -l root chmod 750 /u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh



With the appropriate preparation steps for one of the two mount methods complete, the Clusterware resource for DBFS mounting can now be registered. Register the Clusterware resource by executing the following as the RDBMS owner of the DBFS repository database (typically the "oracle" user). The ORACLE_HOME and DBNAME should reference your Grid Infrastructure ORACLE_HOME directory and your DBFS repository database name, respectively. If mounting multiple filesystems, you may also need to modify the ACTION_SCRIPT and RESNAME. For more information, see the section below regarding Creating and Mounting Multiple DBFS Filesystems.

Create this short script and run it as the RDBMS owner (typically "oracle") on only one database server in your cluster.

##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=fsdb
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type local_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
         STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
         SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh

Change it to:
##### start script add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_share
DBNAME=MYDB
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/11.2.0.3/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type local_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
         STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
         SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh






>vi add-dbfs-resource.sh
exadb01.(MYDB1)  /home/oracle/sshaik
>sh ./add-dbfs-resource.sh



exadb01.(MYDB1)  /home/oracle/sshaik
>srvctl stop database -d MYDB -f
exadb01.(MYDB1)  /home/oracle/sshaik
>srvctl start database -d MYDB


>crsctl stat res dbfs_share -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_share
               OFFLINE OFFLINE      exadb01
               OFFLINE OFFLINE      exadb02
               OFFLINE OFFLINE      exadb03
               OFFLINE OFFLINE      exadb04

exadb01.(MYDB1)  /home/oracle/sshaik
> /u01/app/11.2.0.3/grid/bin/crsctl start resource dbfs_share
CRS-2672: Attempting to start 'dbfs_share' on 'exadb01'
CRS-2672: Attempting to start 'dbfs_share' on 'exadb02'
CRS-2672: Attempting to start 'dbfs_share' on 'exadb04'
CRS-2672: Attempting to start 'dbfs_share' on 'exadb03'
CRS-2676: Start of 'dbfs_share' on 'exadb01' succeeded
CRS-2676: Start of 'dbfs_share' on 'exadb04' succeeded
CRS-2676: Start of 'dbfs_share' on 'exadb03' succeeded
CRS-2676: Start of 'dbfs_share' on 'exadb02' succeeded


exadb01.(MYDB1)  /home/oracle/sshaik
> /u01/app/11.2.0.3/grid/bin/crsctl stat res dbfs_share -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
dbfs_share
               ONLINE  ONLINE       exadb01
               ONLINE  ONLINE       exadb02
               ONLINE  ONLINE       exadb03
               ONLINE  ONLINE       exadb04




    To unmount DBFS on all nodes, run this as the oracle user:
    (oracle)$ /bin/crsctl stop res dbfs_mount

    Note the following regarding restarting the database now that the dependencies have been added between the dbfs_mount resource and the DBFS repository database resource.

    Note: After creating the dbfs_mount resource, in order to stop the DBFS repository database when the dbfs_mount resource is ONLINE, you will have to specify the force flag when using srvctl. For example: "srvctl stop database -d fsdb -f". If you do not specify the -f flag, you will receive an error like this:

    (oracle)$ srvctl stop database -d fsdb
    PRCD-1124 : Failed to stop database fsdb and its services
    PRCR-1065 : Failed to stop resource (((((NAME STARTS_WITH ora.fsdb.) && (NAME ENDS_WITH .svc)) && (TYPE == ora.service.type)) && ((STATE != OFFLINE) || (TARGET != OFFLINE))) || (((NAME == ora.fsdb.db) && (TYPE == ora.database.type)) && (STATE != OFFLINE)))
    CRS-2529: Unable to act on 'ora.fsdb.db' because that would require stopping or relocating 'dbfs_mount', but the force option was not specified

    Using the -f flag allows a successful shutdown and results in no output.

    Also note that once the dbfs_mount resource is started and then the database it depends on is shut down as shown above (with the -f flag), the database will remain down. However, if Clusterware is then stopped and started, because the dbfs_mount resource still has a target state of ONLINE, it will cause the database to be started automatically when normally it would have remained down. To remedy this, ensure that dbfs_mount is taken offline (crsctl stop resource dbfs_mount) at the same time the DBFS database is shut down.








Offnote:

Note that the "crsctl stop cluster -all" syntax may not be used, as it leaves ohasd running and Solaris database hosts require it to be restarted for the workaround to take effect.

ORA-15204: database version is incompatible with diskgroup while creating the database.

While creating the database via DBCA, you may see the below error:

"ORA-15204: database version 11.2.0.0.0 is incompatible with diskgroup DBFS_DG"

Fix:

Change the compatible parameter value in the dbca templates:

go to $ORACLE_HOME/assistants/dbca/templates

>vi New_Database.dbt  or  General_Purpose.dbc <= depends on which template you are using.

Change the below from
         initParam name="compatible" value="11.2.0.0.0"
to
         initParam name="compatible" value="11.2.0.3.0"
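To see why DBCA complained in the first place, check the diskgroup attributes: the diskgroup's compatible.rdbms (DATABASE_COMPATIBILITY below) must not exceed the database's COMPATIBLE parameter.

```sql
-- COMPATIBILITY = compatible.asm, DATABASE_COMPATIBILITY = compatible.rdbms
SELECT name, compatibility, database_compatibility
  FROM v$asm_diskgroup;
```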