If several crashes occur that each leave a large core file behind, this is a good indication of a potential memory leak.
At any stage, to generate a core dump of a running process:
On Solaris:
The -o option specifies the path and base filename for the core dump (gcore appends the PID to it):
gcore -o /obishared/obiqb/Lognode2/core.19199 19199
gcore: /obishared/obiqb/Lognode2/core.19199.19199 dumped
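Before trying to collect a core at all, it is worth confirming that the shell's core-file limit allows one to be written; a minimal sketch (this covers only the per-process shell limit, not system-wide settings such as Solaris coreadm):

```shell
# Core files are written only if the per-process limit permits it.
ulimit -c            # show the current core-file size limit (often 0)
ulimit -c unlimited  # allow full core dumps in the current shell
```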
Start collecting memory consumption statistics.
Solaris
The prstat command can be used to gather basic performance data to help identify processes consuming a large amount
of memory. For example, the data below is sorted by the SIZE column. SIZE is the total virtual memory size of the process:
prstat -s size
AIX
The ps command can be used to show basic memory usage per process. For example, the data below is sorted by the VSZ column, the total
virtual memory size of the process in KB:
ps -efo "vsz,pid,user,cpu,thcount,comm" | sort -n
HP-UX
The top command can be used to show basic memory usage per process. For example, the SIZE column is the total virtual memory size of
the process in KB:
top
Linux
The ps command can be used to show basic memory usage per process. For example, the data below shows the memory being used by all the
siebmtshmw processes in KB:
ps axo user,pid,vsz,thcount,cmd | grep siebmtshmw
If you are experiencing some of the symptoms described above, or if you suspect a memory leak, it is particularly important
to capture performance data so the memory leak can be confirmed. Gather at least the following pieces of
information:
Process ID (PID)
Size of the process
The executing command
The timestamp information showing exactly when the data was captured.
One method of capturing this data is to use a shell script. The following are examples of shell scripts that can be used for the
various OS types:
Solaris
# "pids" is a file listing the process IDs to monitor, one per line
while true
do
  for pid in `cat pids`
  do
    var=`date|cut -d' ' -f4`
    echo "$var :\c" >> ps.log.$pid
    ps -eo pid,vsz,rss,pcpu,args | grep $pid | grep -v grep >> ps.log.$pid
  done
  sleep 30
done
AIX
while true
do
  for pid in `cat pids`
  do
    var=`date|cut -d' ' -f4`
    echo "$var :\c" >> ps.log.$pid
    ps -efo "pid,vsz,user,cpu,thcount,comm" | grep $pid | grep -v grep >> ps.log.$pid
  done
  sleep 30
done
HP-UX
while true
do
  for pid in `cat pids`
  do
    var=`date|cut -d' ' -f4`
    echo "$var :\c" >> ps.log.$pid
    ps -elf | grep $pid | grep -v grep >> ps.log.$pid
  done
  sleep 30
done
Linux
while true
do
  for pid in `cat pids`
  do
    var=`date|cut -d' ' -f4`
    echo "$var :\c" >> ps.log.$pid
    ps axo pid,vsz,user,%cpu,thcount,cmd | grep $pid | grep -v grep >> ps.log.$pid
  done
  sleep 30
done
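Once the ps.log.&lt;pid&gt; files have accumulated, a steadily growing VSZ value is the tell-tale sign of a leak. A minimal sketch for summarizing growth from one of those logs, assuming the log layout produced by the Solaris/Linux loops above (timestamp, a colon, then the ps fields with VSZ as the second ps column; the field position is an assumption and may need adjusting for your ps output):

```shell
# Hypothetical helper: print the first and last VSZ seen in a ps.log file,
# plus the difference, so growth over the sampling window is obvious.
# Assumes each log line looks like: HH:MM:SS :  PID  VSZ  ...
leak_check() {
  awk '{ vsz = $4 }                 # $1=time $2=":" $3=pid $4=vsz (assumed)
       NR == 1 { first = vsz }
       END { printf "first=%s last=%s growth=%s\n", first, vsz, vsz - first }' "$1"
}
```

Usage: `leak_check ps.log.19199` — a growth value that keeps increasing across successive runs supports the memory-leak diagnosis.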
The above are snippets from My Oracle Support (MOS).
References:
Note 477520.1 "How To Troubleshoot Siebel Server Component Crashes on UNIX" (note that a process can crash as a result of a memory leak).
Note 477004.1 "How Can Users Prevent Core Files from Being Overwritten on UNIX Platforms?"
Tuesday, July 26, 2011
Oracle 11g automated maintenance jobs are enabled by default.
Check the status of the jobs with the query below:
select * from DBA_AUTOTASK_CLIENT
If they are enabled and you don't need these jobs, use the commands below to disable them.
SQL> BEGIN
  2  DBMS_AUTO_TASK_ADMIN.disable(
  3  client_name => 'auto optimizer stats collection',
  4  operation => NULL,
  5  window_name => NULL);
  6  end;
  7  /
PL/SQL procedure successfully completed.
SQL> BEGIN
2 DBMS_AUTO_TASK_ADMIN.disable( client_name => 'auto space advisor', operation => NULL,window_name => NULL);
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> BEGIN
2 DBMS_AUTO_TASK_ADMIN.disable( client_name => 'sql tuning advisor', operation => NULL,window_name => NULL);
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> commit;
Commit complete.
SQL>
Now run the status query again to confirm the jobs are disabled.

Oracle APPS Version?
How do I check which version of Apps I am using?
SQL> select RELEASE_NAME from apps.fnd_product_groups;
RELEASE_NAME
--------------------------------------------------
12.1.2
Monday, July 25, 2011
OPatch lsinventory hang
OPatch lsinventory just hangs:
Recent change: the environment was cloned from RAC to a standalone server.
One of the things we forgot after cloning was to update the ORACLE_HOME inventory file:
test case:
export OPATCH_DEBUG=TRUE
$opatch lsinventory -all
shows the old Oracle server names:
Rac::getInstance(): local node is "ourdbdrdb01cdc"
Rac::getRemoteNodes()
No cmd. arg. given, return clusterNodes detected as remoteNodes.
Remote nodes excluding local node:
"ourdbprdb01cdp""ourdbprdb02cdp""ourdbprdb03cdp" <==== these are prod server names still reflecting on the DEV lsinventory:
Rac::isCFS()
So we removed the cluster_info section from the file below (back up the file before making any changes):
$ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml
Rerunning opatch after removing the cluster info from the XML file succeeded this time.
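Hand-editing files under the Oracle inventory is risky, so taking a timestamped backup first is cheap insurance; a minimal sketch (the function name is made up for illustration):

```shell
# Hypothetical helper: copy a file next to itself with a dated .bak suffix
# before hand-editing it (e.g. oraclehomeproperties.xml above).
backup_file() {
  cp -p "$1" "$1.bak.$(date +%Y%m%d)"
}
# Example: backup_file "$ORACLE_HOME/inventory/ContentsXML/oraclehomeproperties.xml"
```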
--------------------------------------------------------------------------------
OPatch succeeded.