Thursday, 31 January 2013

spool command

The SPOOL command is used to echo Server Manager's output to a text file. Spooling output to a file is a good thing to do whenever you run a script, or any other command that produces a lot of output. It enables you to review the output later, without having to worry about it scrolling off the top of the screen.
To begin writing output to a file, use the SPOOL command followed by the name of the file to which you want to write the output. You can specify the full directory path of the file if you like. The following examples will work on Windows NT and UNIX respectively: 
SPOOL c:\output.txt
SPOOL $HOME/output.txt
All the output will now be written to a file named output.txt in the root directory of your C drive (Windows NT), or to your home directory (UNIX). To stop spooling, use the following command:
SPOOL OFF
I use the SPOOL command whenever I run a critical script, or whenever I execute a critical set of commands, because it gives me a record that I can turn to later if I find out that something did not go as expected.
Spooling Results
Many times you will find it helpful to spool the results of your Server Manager session. We find it especially helpful to capture the output of the show parameters command; you can then use this output as a first attempt at customizing an initialization parameter file. By using the captured output as a starting point, you eliminate typing errors.
Note: A misspelled entry in the initialization parameter file will prevent the database from starting up.
To start recording your actions while in Server Manager, do the following:
  1. Enter the command spool file_name, where the file name conforms to the rules of your operating system.
    Oracle redisplays the SVRMGR> prompt after the spool file is opened.
  2. Go about your job in Server Manager, and then close the file by entering the spool off command; Oracle closes the output file and redisplays the Server Manager prompt.

I'm using the following set of commands:
SQL> Spool on
SQL> Spool c:\spooltext.txt
SQL> Query
SQL> Spool off
Please guide me as to what steps need to be executed to avoid column headers.

SQL> Spool on
SQL> set heading off <===
SQL> Spool c:\spooltext.txt
SQL> Query
SQL> Spool off
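Putting the answer together, a fuller SQL*Plus preamble for a clean spool file (the extra SET options are standard SQL*Plus settings, not from the original thread) might look like:

```sql
-- Suppress decoration so the spool file contains only the query results
SET HEADING OFF     -- no column headers
SET FEEDBACK OFF    -- no "n rows selected" messages
SET PAGESIZE 0      -- no page breaks or titles
SET TRIMSPOOL ON    -- trim trailing blanks from spooled lines

SPOOL c:\spooltext.txt
SELECT username FROM all_users;
SPOOL OFF
```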


Apache Server Bounce in R12

How to bounce Apache, clear the HTML cache, and have updated Java code active in R12.

To clear the HTML cache and bounce Apache at R12 level, first go to the scripts directory of R12:
cd $INST_TOP/admin/scripts

To clear HTML cache :
The _pages directory under $COMMON_TOP should not be cleared in E-Business Suite R12, as we used to do in 11i applications. To clear the middle-tier cache in R12,

Navigate to:
Functional Administrator responsibility > Core Services > Caching Framework > Global Configuration > Clear cache

Clearing _pages in R12 instead causes a blank login page, because in R12 the JSP files are not compiled automatically.
cd $INST_TOP/admin/scripts
./ stop
./ start

At R12 level, the command to bounce Apache is different from the one that bounces the oacore OC4J engine and the others.
A typical situation: you make a modification to a JSP and want that modification recognized by the OC4J JSP engine.

In Oracle Apps 11i, the Apache start and stop commands also bounced the JVM running under JServ, and caches were cleared as below:

To clear the JSP and mod_plsql caches, run these commands:
rm -rf $OA_HTML/_pages/*
rm -rf $COMMON_TOP/_pages/*
rm -rf $IAS_ORACLE_HOME/Apache/modplsql/cache/*

But in Oracle EBS R12, the command that stops and starts Apache acts only on the Apache web server, not on the OC4J container that runs the JVM.

To bounce the OC4J container there is a specific script:

Therefore, when you make a modification to a Java class for the web application in R12, in order to have that change recognized, you need to bounce OC4J using that script:
cd $INST_TOP/admin/scripts
./ stop
./ start
Bouncing oacore OC4J engine may also be needed if you make modifications to system properties.
Example for JTT based applications:  jtt system properties. 

$INST_TOP/admin/scripts/ restart
You can use these steps to bounce Apache in R12:

1) Log in to your instance with PuTTY or any other SSH client.

2) cd $INST_TOP/admin/scripts

3) stop

4) stop

5) start

6) start

Every time you bounce Apache, you need to follow these steps in the same sequence.

Try this; I am sure it will work.

Sunday, 27 January 2013


A patch is a program that fixes a particular problem, or adds/enhances a particular feature, in an existing program, product, or software.

In Oracle, to apply a database patch we use 'opatch' ($ORACLE_HOME/OPatch), whereas to apply an applications patch we use the AD utility 'adpatch' ($AD_TOP/bin).

Before going into the steps for patching, a brief note on Maintenance Mode: Maintenance Mode is a mode of operation, introduced with AD.I.2, in which the Oracle Applications system is made accessible only for patching activities. It greatly improves patching performance by minimizing downtime.
If you wish to apply patch without putting applications in maintenance mode (for small patches) use options=hotpatch with adpatch.

Here I am explaining basic steps that are performed in patching :

STEP 1: Before applying a patch you must check whether it has already been applied. For this we query the database:
*sqlplus apps/<apps password>@<tnsalias>, then:
select * from ad_bugs where bug_number = '<patch number>';
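AD_BUGS can also list bugs delivered inside other patches, so as a sketch (assuming the standard AD schema; verify the columns in your release) you can additionally check AD_APPLIED_PATCHES:

```sql
-- Did an adpatch run actually apply this patch?
SELECT patch_name, patch_type, creation_date
FROM   ad_applied_patches
WHERE  patch_name = '<patch number>';
```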

STEP 2: Download the patch.
*Log in to Oracle MetaLink.
*Select the Patches option, then select the search type.
*Query for the patch by entering the patch number and the platform for which you want the patch.
*Click Download.
If you have downloaded the patch to the desktop, move it to the directory where you want to unzip it:
mv <> <destination patch directory>
eg:  mv /u1/apps/patch

STEP 3: Unzip the patch. For this:
*Log in as the application tier user.
*Go to the directory where you have your patch and type the command:
unzip <patch>.zip
This will unzip the patch in the current directory and create the required patch directories and subdirectories.

STEP 4: Enable Maintenance Mode. For this:
*Source the environment file located in APPL_TOP.
*Run the AD Administration utility by typing adadmin on the unix/linux console. It will ask questions related to the admin utility, with default answers in brackets. Then it shows the following options and asks for your choice:
1.Generate applications file menu.
2.Maintain applications file menu.
3.Compile/Reload Applications Database Entities Menu.
4.Maintain Applications Database Entities Menu.
5.Change Maintenance Mode.
6.Exit ad Administration.

Select option 5. The status of maintenance mode is displayed at the top of the Change Maintenance Mode menu. It will then show the following options and ask for a choice:
1.Enable Maintenance mode.
2.Disable Maintenance mode.
3.Return to Main Menu.
Select option 1, then return to the console.

STEP 5: Run autopatch from the patch directory by entering the following command:
$ adpatch
After this, answer the questions of autopatch. As autopatch finishes its tasks, it writes timing information to the AD timing report for jobs running in parallel (if any) and reminds you to review the log files for any errors.
If you don't see the "autopatch is complete" message at the end of the autopatch log file, autopatch did not complete successfully.
The most important step after autopatch completes is to check the log files for any errors that may have occurred during the patching process. Check the main autopatch log file first, then additional log files as necessary.

The default name of the main autopatch log file is adpatch.log. The file is located in $APPL_TOP/admin/<SID>/log.

Question for readers: what is the lgi file in the log location (the directory above)?

HOT Patch

APPS - how to apply a patch hot (without enabling maintenance mode)

Use the following options when calling adpatch:



adpatch defaultsfile=/u02/app/applmgr/11.5/admin/SIDNAME/def.txt \
logfile=all_5162862.log \
patchtop=/copy/APPS_PATCHES/2006/002/5162862 \
driver=u5162862.drv \
workers=4 \
interactive=yes \
options=hotpatch,noprereq

In the above example we will:
apply patch# 5162862
using u5162862.drv driver
using 4 workers
in interactive mode
without validating pre-reqs
without enabling maintenance mode (hotpatch)

You can safely drop defaultsfile from the call if you don't have one created.

Thursday, 24 January 2013


FRM-41050 and FRM-40352 encountered when querying on lines in Prepare Mass Additions (FAXMADDS) [ID 1495508.1]

Applies to:

Oracle Assets - Version 12.1.3 and later
Information in this document applies to any platform.


When attempting to query lines in the Prepare Mass Additions, the following error occurs.

"FRM-41050: You cannot update this record" pops up repeatedly, and after many of these messages one more error is thrown, "FRM-40352: Last record of query retrieved", after which the application closes.


The cause is an old version of the form.

As per diagnostics:
FAXMADDS.fmb 120.133.12010000.18


1.  Apply Patch 14620236:R12.FA.B.

2.  Confirm the following file versions:
  FAXMADDS 120.133.12010000.50


FRM-92102 "A Network Error Occurred" in A Proxy Or Firewall Scenario [ID 312517.1]

Applies to:

Oracle Forms - Version and later
Information in this document applies to any platform.
Checked for relevance on 11-Aug-2012.


When a site has one or more proxies sitting between client and Forms server in a WAN or VPN configuration, Forms may fail intermittently with FRM-92102. With no proxies the Forms sessions run fine.


New installation or change in proxy configuration


Whenever proxies or firewalls are involved and FRM-92102 occurs, it is always useful to generate a Forms Listener Servlet trace of the good and the bad scenarios, and compare the HTTP headers sent in both instances. In this specific case, the proxy
1) Drops the keep-alive header
2) Changes the PRAGMA header by appending ", no-cache" at the end. So "pragma: -7" becomes "pragma: -7, no-cache".

The trace log shows that the form is terminated after a negative pragma header is received which has ", no-cache" appended. This is known; Forms WILL fail when a negative pragma has a no-cache modifier. The next POST conversation in the trace file will indicate failure.


Contact the team responsible for configuring the proxy, and request that the KEEP-ALIVE and PRAGMA headers be left unmodified.

Regenerate the trace for the previously bad scenario, then compare and verify that no HTTP headers are dropped or modified.


Applies to:

Oracle Human Resources - Version: and later   [Release: 11.5.10 and later ]
Information in this document applies to any platform.


Assignment screen >
attempt to select Supervisor name and receive error

FRM-40212: Invalid value for field SUPERVISOR_NAME

Bug 11704323 had previously been opened for this customer for same error with FIND_PERSON.

Customer was advised to apply Patch 7292975 and this patch did fix the
issue with the Find Person screen.

This patch did not resolve the same problem occurring in the Supervisor Name field on the Assignment screen.


This issue arose after upgrading to a newer database version.


=== ODM Cause Determination ===



Apply RDBMS Patch 11706826 - MERGE REQUEST ON TOP OF FOR BUGS 10149223 10405897


Receiving Transaction Processor Error: RVTTH 448: Subroutine Error - When Receiving an Expense Item [ID 338270.1]

Applies to:

Oracle Inventory Management - Version 11.5.10 to 11.5.10.CU2 [Release 11.5 to 11.5.10]
Information in this document applies to any platform.
EXECUTABLE:RVCTP - Receiving Transaction Processor


The receiving transaction processor does not error when receiving an inventory Purchase Order.

Receiving cannot be performed for an expense Purchase Order (PO); after saving, the receiving transaction processor errors out with the following error.

RVTTH-448: Subroutine Error at: 40 Error calling
PO_MRC_ENGINE_GRP.Maintain_MRC_DataUser-Defined Exception in Package
RCV_CreateAccounting_PVT Procedure Insert_SubLedgerLines returned error
Cause: Subroutine Error at: 40 Error call

Steps to Reproduce
1. Search on the PO number in the Enter Receipts form
2. Input receiving headers details.
3. Check line to receipt.
4. Click save.
5. Go to View>Request to check on the receiving transaction processor


When creating the receiving subledger row for an expense PO for the reporting SOB, the MRC API passed the wrong conversion rate type when getting the exchange rate.

The fix is included in the following files:
POXVMCIB.pls version 115.6.11510.2
POXVMCIS.pls version 115.0.11510.2

A patch was released for this bug, but other patches with higher file versions conflicted with this fix, so the recommended patch for this issue provides higher versions of the files that include all fixes.


  1. Apply Patch  5616611
  2. Verify files are updated to the following versions:
    POXVMCIB.pls version 115.13
    POXVMCIS.pls version 115.1


ORA-20001: APP-FND-01728: An Assignment does not exist for these parameters [ID 556269.1]

Applies to:

Oracle Order Management - Version: to - Release: 11.5 to 11.5
Information in this document applies to any platform.
***Checked for relevance on 31-OCT-2011***


On in Production:
When attempting to create new order transaction type the following error occurs.

ORA-20001: APP-FND-01728: An Assignment does not exist for these parameters

The issue can be reproduced at will with the following steps:
1. Create a new Transaction Type:
         Order Management > Setup > Transaction Type > Define
2. Create a new sequence. OM > Setup> Documents >Define
3. Make sure the category exists (same name as order transaction type) OM>setup>Docs>Categories
4. Assign the sequence to category. OM > Setup> Documents >Assign
5. Use the category in a new sales order

Due to this issue, users cannot assign new order numbers for the new order type


 The 'Sequence Assignment' operation was done specifying a Set of Books which did not relate to the Operating Unit which was automatically in force (because of the Responsibility selected) when the Document Type was created.


You can use the following SQL*Plus query (run as the 'APPS' username) to verify which Set of Books should have been used when assigning the Sequence:
First, navigate to 'Order Management > Setup > Transaction Types > Define', and query the relevant Transaction Type.
Use 'Help > Diagnostics > Examine' to establish the value of ORG_ID.
Then navigate to 'Order Management > Setup > Documents > Assign' and query for the Category which has the same name as the above Transaction Type.
Use 'Help > Diagnostics > Examine' to establish the value(s) of SET_OF_BOOKS_ID used to link Document Sequences to this Category

Then run :
select p.org_id, p.set_of_books_id,
from ar_system_parameters_all p, gl_sets_of_books s
where p.set_of_books_id = s.set_of_books_id
  and p.org_id = &Org_ID_established_above;

If the Set of Books returned by the above script is not one of the ones which appear in the Sequence Assignment form, then that Assignment can not be seen from the Operating Unit which you are using.


When upgrading from 11i to 12.1.3 (via the 12.1.1 version), ORA-06512: at "APPS.FA_SLA_CURRENT_PERIOD_UPG_PKG" occurs in package FACPUPG.SQL [ID 1454636.1]

Applies to:

Oracle Assets - Version 12.1.3 and later
Information in this document applies to any platform.


Upgrading from 11i to 12.1.3 (via 12.1.1 first), the following SQL, /opt/vision/app/finhr12/apps/apps_st/appl/fa/12.0.0/patch/115/sql/facpupg.sql, errors out with:
ERROR at line 1:
ORA-06510: PL/SQL: unhandled user-defined exception
ORA-06512: at line 103


The lower version of FACPUPGB.pls (120.23.12010000.4.1201010.3) was missing additional parameters in the FAXEXP and FA_BOOKS_PKG.UPDATE_ROW calls, more specifically the parameters:
X_mrc_sob_type_code => l_mrc_sob_type_code,              <--- the additional parameter

x_impairment_exp    => l_impairment_exp,                 <--- the additional parameter


1. Apply Patch 12537420 and pre-requisites
    If a password is required, please log an SR to obtain it.

2. Confirm the following file versions:
       FACPUPGB.pls     120.23.12010000.7


Applies to:

Oracle Server - Enterprise Edition - Version and later
Information in this document applies to any platform.


During DataPump export or import you receive the below error messages:
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation


1. One common reason this problem occurs is that the listener process was not started under the same account as the database instance service. The listener forks the new server process, and when this runs under a different security context than the database, access to directories and files is likely impacted.
Please verify the following information:
1) the output of:
ps -ef | grep SMON
2) the output of:
ps -ef | grep tnslsnr
3) the output of:
ps -ef|grep LIST
4) the output of:
ls -ld <full directory name of the directory to which the export/import is written>
2. This issue can occur in RAC. DataPump export/import runs fine without the connect string.
expdp system/xxx directory=xxxxxxx dumpfile=xxxxxx.dmp logfile= xxxxxx.log schemas=xxxx

Export: Release - 64bit Production on Tuesday, 08 March, 2011 11:15:23

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_03": system/******** directory=xxxxx dumpfile=xxxxx.dmp logfile= xxxxxx.log schemas=xxxxxxx
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 3.25 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "xxxxxx"."xxxxxxxxxxxx" 1.122 MB 1156 rows
. . exported "xxxxxx"."xxxxxxxxxxxx" 131.1 KB 3980 rows
. . exported "xxxxxx"."xxxxxxxxxxxx" 7.710 KB 1 rows
. . exported "xxxxxx"."xxxxxxxxxxxx" 6.054 KB 5 rows
. . exported "xxxxxx"."xxxxxxxxxxxx" 5.5 KB 2 rows
. . exported "xxxxxx"."xxxxxxxxxxxx" 5.492 KB 1 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_03" successfully loaded/unloaded
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_03 is:
Job "SYSTEM"."SYS_EXPORT_SCHEMA_03" successfully completed at 11:15:41
But with the connect strings, it fails with the below error message
expdp system/xxxx@db directory=xxxxxxxxxx dumpfile=xxxxxxxx.dmp logfile= xxxxxxx.log schemas=xxxxxx

Export: Release - 64bit Production on Tuesday, 08 March, 2011 11:16:20

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
Note : Both commands ran on the same server.
The reason can be that the connect string (TNS name) is a load-balancing connect string: whenever you use the connect string with expdp/impdp, the job may go to another node where the directory information is not available, or the directory may be a local, non-shared folder.
3. For a single node or RAC instance (this fails both with and without the connect string), the folder path or the folder does not exist.
4. The directory path/folder exists, but CREATE DIRECTORY was executed by one database user and the import is run by a different user.
Run the below query to cross verify who owns the directory
set pages 999 lines 200
select * from dba_directories where directory_name = '<directory_name>';


1. Make sure the listener and instance services are started by the same account.
2. Make sure the directory is shared between nodes so that it can be accessed from any instance; or create an identical folder locally on each node, and if a folder with the same path already exists on all nodes, check that the permissions are correct.
3. Make sure the folder exists as specified in the "CREATE DIRECTORY" command.
4. Grant the required permissions on the directory to the importing user:
grant read, write on directory <directory_name> to <username>;
If none of the above four causes applies at your end, check whether the user has the proper permissions to run the UTL_FILE package.
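A quick way to verify points 3 and 4 from SQL*Plus (the directory name, path, and user below are illustrative):

```sql
-- Run as a DBA: create the directory object and grant access
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/export';  -- the OS path must exist on the DB node
GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;

-- Confirm owner and path
SELECT owner, directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'DP_DIR';
```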

FRM 41830

FRM 41830: While Drilling Down To Planning Workbench Form Application Dashboard [ID 1333288.1]

Applies to:

Oracle Advanced Planning Command Center - Version 12.1.3 and later
Information in this document applies to any platform.


FRM-41830: List of Values contains no values, while drilling down to the Planning Workbench from the APCC dashboard

Drilling from the Demand Exceptions Summary, Late Replenishment for Sales Order, Exception by Item: clicking the item link opens ASCP but does not drill directly to the item or exception. It instead drills to a "Find Exceptions" window.


The "Service Supply Chain Analyst", "Supply Chain Analyst", "Sales and Operations Planning Analyst", and "Supply Chain Risk Analyst" responsibilities do not contain any organizations.


Navigate to the "Advanced Planning Administrator" responsibility > Organization Security screen and add the required organizations to the above-mentioned OBIEE dashboard responsibilities.



 Applies to:

Oracle Financials for the Americas - Version 12.0.6 and later
Information in this document applies to any platform.


On : 12.0.6 version, Oracle Applications Tax Module

Attempting to open the List of Values of Product Fiscal Classification on Tax Details on Transactions Form and receive the following error:

FRM-41830: List of Values contains no entries

Steps to Reproduce:
Responsibility:  Receivables Brazil
Navigation:  Transactions > Transactions
Enter a new transaction
Enter Line Items
Click Tax Information
Try open list values in the Product Fiscal Classification field
 ... Get the error


The issue is caused by the "Item Categories" flexfield having an incorrect title on the "FISCAL_CLASSIFICATION" structure.
The form code internally looks for "Fiscal Classification" instead of "FISCAL_CLASSIFICATION".


To implement the solution, please execute the following steps:

Responsibility:  Receivables
Navigation:  Setup > Financials > Flexfields > Key > Segments
Query by Flexfield Title = "Item Categories"
Look for a Structure code = "FISCAL_CLASSIFICATION"
Change the Title from "FISCAL_CLASSIFICATION" to "Fiscal Classification"
  ---->> Note: it's just the title you need to change
Save the changes and the flexfield will be compiled automatically
  ---->> (In case it is not compiling, you need to unfreeze, do the change, save, freeze and compile the flexfield again)

Migrate the solution as appropriate to other environments.

Monday, 21 January 2013

Oracle 10g Installation & Configuration On Solaris

Oracle 10g Installation & Configuration On Solaris Platform
Step 1
Install the Operating System
Note: No specific operating system patches are required with the Solaris 10 OS.

Step 2
Make sure that the following software packages have been installed.

Note: We can verify whether the packages are installed by using the following command: $ pkginfo -i

Step 3
Check that the following executable files are present in /usr/ccs/bin

Step 4
Check swap space. Swap space should be 512 MB or twice the size of RAM. Use the following commands to check physical memory and swap space:

$ /usr/sbin/prtconf | grep size
$ /usr/sbin/swap -l

Step 5
You need at least 400 MB of free space in the /tmp directory.

Step 6
Set the following kernel parameters in the /etc/system file and reboot the server.

set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmni=100

Step 7
Create the groups.

$ groupadd -g 300 dba *
$ groupadd -g 301 oinstall **

* The "dba" group will be used by the Oracle Software Owner and database administrators.

** The "oinstall" group is used when you install multiple copies of the Oracle software on one server and want some logins to be able to log on to some databases with DBA privileges but not others.

Step 8
Create a Unix user that will be the Oracle Software Owner.

$ useradd -c 'Oracle Software Owner' -d /oracle -g oinstall -G dba -m -u 300 -s /usr/bin/ksh oracle

$ passwd oracle

Step 9
Create directories for the Oracle software and database.

$ mkdir /oracle/app /oracle/oradata
$ chown oracle:dba /oracle
$ chmod 775 /oracle

Step 10
Create the /var/opt/oracle directory

$ mkdir /var/opt/oracle
$ chown oracle:dba /var/opt/oracle
$ chmod 755 /var/opt/oracle

Step 11edit .profile file and type following endearments
export ORACLE_BASE=/oracle
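The post sets only ORACLE_BASE; a fuller .profile sketch (the ORACLE_HOME layout and SID below are assumptions for illustration, not from the original post) would be:

```shell
# Assumed layout: ORACLE_HOME under ORACLE_BASE; adjust the SID and paths to your install
export ORACLE_BASE=/oracle
export ORACLE_HOME=$ORACLE_BASE/app
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
```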

Step 12
Set up the X Window environment.

Log in as root with a CDE (Common Desktop Environment) session:
$ DISPLAY=:0.0
$ export DISPLAY
$ xhost +
$ su - oracle
$ export DISPLAY

Step 13

Log in as the oracle user and execute runInstaller.

Then work through the installer prompts one at a time.


APP-SQLAP-10380 You Cannot Select this Payment Document

Step 1. Check that there are no locks on the payment batch

SELECT a.object_id, a.session_id, b.object_name
FROM v$locked_object a, dba_objects b
WHERE a.object_id = b.object_id;

This select should return one row, displaying AP_CHECK_STOCKS_ALL.

Step 2. Get the user's session ID and serial number.

SELECT sid, serial# FROM v$session WHERE sid = <session_id from Step 1>;

Step 3. Once you have the two pieces of information from above, run the following:

ALTER SYSTEM KILL SESSION '<sid>,<serial#>';
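The two lookups can also be combined into a single query (a sketch using the standard v$ views):

```sql
-- Find the session holding the lock on AP_CHECK_STOCKS_ALL in one pass
SELECT s.sid, s.serial#, o.object_name
FROM   v$locked_object l, dba_objects o, v$session s
WHERE  l.object_id   = o.object_id
AND    l.session_id  = s.sid
AND    o.object_name = 'AP_CHECK_STOCKS_ALL';
```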


ERROR : ORA-20100: File o0013319.tmp creation for FND_FILE failed

When attempting to run a concurrent request, the following error occurs in the log file:
ORA-20100: File o0013319.tmp creation for FND_FILE failed.
You will find more information on the cause of the error in the request log.
Error: function fa_asset_trace_pkg.save_output returned failure
This error is caused by a concurrent manager setup and privilege-related issue with the $APPLPTMP environment variable and the UTL_FILE_DIR database parameter.

NOTE: Applications also produces temporary PL/SQL output files used in concurrent processing. These files are written to a location on the database server node specified by the APPLPTMP environment setting. The APPLPTMP directory must be the same directory as specified by the utl_file_dir parameter in your database initialization file.

Step 1 Make sure that APPLPTMP is set to a valid directory and that BOTH the applmgr user and the database user have read and write permissions on that directory (normally, it can be set to the same directory as APPLTMP).

Step 2 Make sure the directory pointed by APPLPTMP is the first entry on the utl_file_dir. Also, verify that all the entries on the utl_file_dir are valid and that the applmgr has read/write permissions.

Step 3 Check whether the application user is able to create a file in the above directory; if not, go to the next step.

Step 4 Grant all users read/write permissions to that directory:

chmod 777 /usr/tmp

Step 5 Check that the database user can write a file directly using FND_FILE, which is the package used by the Application. From SQL*Plus, connected as the apps user, run:


This should dump a file in APPLPTMP.
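A minimal check along these lines (assuming the standard FND_FILE API) can be used:

```sql
-- Writes a test line via FND_FILE; a temporary .tmp file should appear under $APPLPTMP
exec FND_FILE.PUT_LINE(FND_FILE.LOG, 'FND_FILE test from SQL*Plus');
```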

Performance Tuning

Performance Tuning Guide-

Digging into a SQL performance issue by using the SQL Tuning Advisor

Step 1: Find CPU Load average history

cd /var/log/sa
sar -q -f sa10 | less

Step 2: Generate ADDM report.
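One common way to generate it (using the script shipped in the standard $ORACLE_HOME/rdbms/admin location) is:

```sql
-- Run from SQL*Plus as a DBA; the script prompts for the snapshot range and report name
@?/rdbms/admin/addmrpt.sql
```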


Step 3: Find SQL ID which is using high CPU

Step 4: creating the tuning task

set serveroutput on
declare
  l_sql_tune_task_id  varchar2(100);
begin
  l_sql_tune_task_id := dbms_sqltune.create_tuning_task (
                          sql_id      => '<sql_id>',
                          scope       => dbms_sqltune.scope_comprehensive,
                          time_limit  => 7200,
                          task_name   => '<task_name>',
                          description => 'tuning task for statement <sql_id>');
  dbms_output.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
end;
/

Step 5: Executing the tuning task

exec dbms_sqltune.execute_tuning_task(task_name => '<task_name>');

Step 6: Displaying the recommendations

set long 100000;
set longchunksize 1000
set pagesize 10000
set linesize 100
select dbms_sqltune.report_tuning_task('') as recommendations from dual;


How to Tune Segment if ADDM report reported that segment is responsible for significant “User I/O”


Run "Segment Advisor" on TABLE "AR.AR_PAYMENT_SCHEDULES_ALL" with object ID 28886.
Step 1: Creating the tuning task and Execute.

variable id number;
declare
  name   varchar2(100) := '<task_name>';
  descr  varchar2(500);
  obj_id number;
begin
  descr := 'Segment Advisor Example';

  dbms_advisor.create_task (
    advisor_name     => 'Segment Advisor',
    task_id          => :id,
    task_name        => name,
    task_desc        => descr);

  dbms_advisor.create_object (
    task_name        => name,
    object_type      => 'TABLE',
    attr1            => 'AR',
    attr2            => 'AR_PAYMENT_SCHEDULES_ALL',
    attr3            => NULL,
    attr4            => NULL,
    attr5            => NULL,
    object_id        => obj_id);

  dbms_advisor.set_task_parameter (
    task_name        => name,
    parameter        => 'recommend_all',
    value            => 'TRUE');

  dbms_advisor.execute_task(name);
end;
/


Step 2: Check Task status.

select task_name, status from dba_advisor_tasks where owner = 'SYS' and advisor_name = 'Segment Advisor' and task_name='';

Step 3: Displaying the recommendations

select af.more_info, af.task_name, ao.attr2 segname, ao.attr3 partition, ao.type, af.message
from dba_advisor_findings af, dba_advisor_objects ao
where ao.task_id = af.task_id
and ao.object_id = af.object_id
and ao.owner = 'SYS' and af.task_name='';

Step 4: If the advisor shows the message:

The free space in the object is less than the size of the last extent.

Then follow the below action plan:

A) Take a backup of the objects
B) Back up the object scripts by using TOAD
C) Check for function-based indexes on the objects; if any exist, copy their scripts
D) After copying the scripts, drop the function-based indexes on the objects
E) Enable row movement on the objects

SQL> alter table <owner>.<table_name> enable row movement;

F)     Shrink the space

SQL> alter table <owner>.<table_name> shrink space;

G) Recreate the function-based indexes on the table by executing the copied create index scripts.


ORA-32700: error occurred in DIAG Group Service

After applying patch set 7 on an existing Oracle installation (to upgrade it), the "ORA-32700: error occurred in DIAG Group Service" error occurred at the time of starting the database with the migrate option.
SQL> startup migrate
ORACLE instance started.
Total System Global Area  581505840 bytes
Fixed Size                   451376 bytes
Variable Size             402653184 bytes
Database Buffers          167772160 bytes
Redo Buffers               10629120 bytes
ORA-32700: error occurred in DIAG Group Service
=======End Of Error=========
The reason behind this:

The Oracle Universal Installer (OUI) detects cluster software on the server and assumes RAC is being installed, even if it is not.

To resolve the problem, follow these steps to turn RAC off:
$ cd $ORACLE_HOME/rdbms/lib 
$ make -f rac_off ioracle

AWR Report

Automatic Workload Repository (AWR) in Oracle

Oracle has provided many performance gathering and reporting tools over the years. Originally the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle8i introduced the Statspack functionality, which Oracle9i extended. In Oracle 10g, Statspack has evolved into the Automatic Workload Repository (AWR). The AWR is a repository of performance information collected by the database to aid the tuning process for DBAs.

Oracle 10g uses a scheduled job, GATHER_STATS_JOB, to collect optimizer statistics automatically. This job is created and enabled automatically when we create a new Oracle database.

We can disable this job using the dbms_scheduler.disable procedure:

Exec dbms_scheduler.disable('GATHER_STATS_JOB');

And we can enable it again using the dbms_scheduler.enable procedure:

Exec dbms_scheduler.enable('GATHER_STATS_JOB');
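To verify the job's current state after enabling or disabling it, we can query the DBA_SCHEDULER_JOBS view:

```sql
-- Shows whether GATHER_STATS_JOB is currently enabled and its state.
SELECT job_name, enabled, state
FROM   dba_scheduler_jobs
WHERE  job_name = 'GATHER_STATS_JOB';
```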

AWR consists of a collection of performance statistics including:
  • Wait events used to identify performance problems.
  • Time model statistics indicating the amount of DB time associated with a process from the v$sess_time_model and v$sys_time_model views.
  • Active Session History (ASH) statistics from the v$active_session_history view.
  • Some system and session statistics from the v$sysstat and v$sesstat views.
  • Object usage statistics.
  • Resource intensive SQL and PL/SQL.

The resource-intensive SQL and PL/SQL sections of the report can be used to focus tuning efforts on the areas that will yield the greatest returns. The statements are ordered by several criteria, including:
  • SQL ordered by Elapsed Time
  • SQL ordered by CPU Time
  • SQL ordered by Gets
  • SQL ordered by Reads
  • SQL ordered by Executions
  • SQL ordered by Parse Calls
  • SQL ordered by Sharable Memory

Several of the automatic database tuning features require information from the AWR to function correctly, including:
  • Automatic Database Diagnostic Monitor
  • SQL Tuning Advisor
  • Undo Advisor
  • Segment Advisor

How to generate an AWR report?

Oracle provides two scripts to generate the AWR report. They are available in the directory $ORACLE_HOME/rdbms/admin. The two scripts are:

1.) awrrpt.sql  : If we have only one Oracle instance, run the awrrpt.sql script.

2.) awrrpti.sql : If we have more than one Oracle instance (as in RAC), run the awrrpti.sql script so that we can specify a particular instance for AWR report creation.

By default, snapshots of the relevant data are taken every hour and retained for 7 days. These defaults can be altered using the DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS procedure:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,        -- Minutes (= 30 Days). Current value retained if NULL.
    interval  => 15);          -- Minutes. Current value retained if NULL.
END;
/

Here we have altered the snapshot interval to 15 minutes. An interval of 15 minutes between two snapshots is generally sufficient to pinpoint a performance bottleneck.
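The snapshot interval and retention currently in effect can be checked in the DBA_HIST_WR_CONTROL view:

```sql
-- Displays the AWR snapshot interval and retention currently in effect.
SELECT snap_interval, retention
FROM   dba_hist_wr_control;
```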

AWR using Enterprise Manager    :   The automated workload repository administration tasks have been included in Enterprise Manager. The "Automatic Workload Repository" page is accessed from the main page by clicking on the "Administration" link, then the "Workload Repository" link under the "Workload" section. The page allows us to modify AWR settings or manage snapshots without using the PL/SQL APIs.

Here is the Demo of the AWR report .

C:\>sqlplus sys/xxxx@orcl as sysdba
SQL*Plus: Release Production on Thu Jun 16 11:42:19 2011
Copyright (c) 1982, 2010, Oracle.  All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @D:\app\Neerajs\product\11.2.0\dbhome_1\RDBMS\ADMIN\awrrpt.sql

Current Instance
   DB Id     DB Name   Inst Num  Instance
-----------  --------  --------  ------------
 1281052636  ORCL             1  orcl
Specify the Report Type
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: HTML

Type Specified:  html
Instances in this Workload Repository schema
   DB Id      Inst Num  DB Name   Instance  Host
------------  --------  --------  --------  --------
* 1281052636         1  ORCL      orcl      xxxx

Using 1281052636 for database Id
Using          1 for instance number

Specify the number of days of snapshots to choose from
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:       (Press Enter to see all the snapshots)

Listing all Completed Snapshots
Instance  DB Name   Snap Id  Snap Started        Level
--------  --------  -------  ------------------  -----
orcl      ORCL            1  08 Jun 2011 11:30       1
                          3  08 Jun 2011 14:41       1
                          4  08 Jun 2011 15:30       1
 ... data truncated ...
                        120  16 Jun 2011 05:30       1
                        121  16 Jun 2011 06:30       1
                        122  16 Jun 2011 07:30       1
                        123  16 Jun 2011 08:30       1
                        124  16 Jun 2011 09:30       1
                        125  16 Jun 2011 10:30       1
                        126  16 Jun 2011 11:30       1

Specify the Begin and End Snapshot Ids
Enter value for begin_snap: 125
Begin Snapshot Id specified: 125

Enter value for end_snap: 126
End   Snapshot Id specified: 126
Specify the Report Name
The default report file name is awrrpt_1_125_126.html.  To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:          (Press enter if you want to use the above name)
Using the report name awrrpt_1_125_126.html
Report is truncated 
End of Report
Report written to awrrpt_1_125_126.html

In the report above, the values shown after the prompts (report type, snapshot IDs, report name) are the ones entered by the user.

How To Understand an AWR / Statspack Report?


The awrrpt.sql script is located in $ORACLE_HOME/rdbms/admin/. Execute it as shown below:

SQL*Plus: Release - Production on Sun Jul 10 14:37:04 2011

Copyright (c) 1982, 2007, Oracle. All rights reserved.

SQL> CONN sys@stlbas105 as sysdba
Enter password:
SQL> @G:\app\Administrator\product\11.1.0\db_1\RDBMS\ADMIN\awrrpt.sql

Current Instance

DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
2515622958 STLBAS 1 stlbas

Specify the Report Type
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: text

Type Specified: text

Instances in this Workload Repository schema

DB Id Inst Num DB Name Instance Host
------------ -------- ------------ ------------ ------------
* 2515622958 1 STLBAS stlbas TESTSERVER

Using 2515622958 for database Id
Using 1 for instance number

Specify the number of days of snapshots to choose from
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.

Enter value for num_days:

This document shows you some basic checks to be done from AWR to identify the problem.

Older database versions, such as 8i and 9i, have the Statspack report only.

From 10g onwards, AWR is available along with Statspack.
Automatic Workload Repository (AWR): The AWR provides information to different manageability components. It consists of two parts: in-memory statistics accessible through the V$ dynamic views, and AWR snapshots saved in the database, which represent the persistent and historical portion.

AWR snapshots can be generated at will using the following syntax:

EXECUTE dbms_workload_repository.create_snapshot();

By default in 10g, AWR snapshots are generated automatically on an hourly basis.
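Whether taken automatically or manually, the snapshots available for reporting can be listed from the DBA_HIST_SNAPSHOT view:

```sql
-- Lists existing AWR snapshots and the interval each one covers.
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```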

If you are facing a performance problem in the database and you have a license for AWR (the Diagnostics Pack), then AWR reports can be generated for the problem period. If no AWR license is available, a Statspack report can be generated instead.

The AWR/Statspack report should cover an interval of not more than 60 minutes during the problem. Please don't take an AWR/Statspack report spanning five or six hours, as the averages over such a long duration would not be reliable.

The AWR report can be taken in both html/text format.
1) The first thing to be checked in AWR report is the following:-

            Snap Id  Snap Time           Sessions  Cursors/Session
Begin Snap:     112  11-Jun-09 00:00:57       191              6.7
End Snap:       113  11-Jun-09 01:00:11       173              7.4
Elapsed:      59.23 (mins)
DB Time:     710.73 (mins)
Check the "DB Time" metric. If it is much higher than the elapsed time, it indicates that the sessions are spending a lot of time working or waiting.

Here, the elapsed time is around 60 minutes while the DB Time is around 710 minutes. This means sessions spent a combined 710 minutes on CPU and waits within a one-hour window.
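A quick way to interpret this ratio is as the average number of active sessions over the interval; the division can be done right from SQL*Plus:

```sql
-- Average active sessions = DB Time / Elapsed Time (both in minutes).
-- Here 710.73 / 59.23 is about 12 sessions active on average.
SELECT ROUND(710.73 / 59.23, 1) AS avg_active_sessions FROM dual;
```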

2) Next thing to be looked is the following:-

Instance Efficiency Percentages (Target 100%)
            Buffer Nowait %: 100.00       Redo NoWait %: 100.00
               Buffer Hit %:  98.67    In-memory Sort %: 100.00
              Library Hit %:  98.60        Soft Parse %:  99.69
         Execute to Parse %:   5.26         Latch Hit %:  99.31
Parse CPU to Parse Elapsd %:  12.78      %Non-Parse CPU:  99.10
As a rule of thumb, the Instance Efficiency Percentages should ideally be above 90%.
3) Then comes the Shared Pool Statistics.
Shared Pool Statistics
                           Begin    End
                           -----    -----
Memory Usage %:            85.49    80.93
% SQL with executions>1:   42.46    82.96
% Memory for SQL w/exec>1: 47.77    81.03
This shows the memory usage statistics of the shared pool. Ideally, usage should not be excessive; if it is very high (beyond 90%), it indicates contention in the shared pool.

4) Next thing to be looked after is the Top 5 Timed Events table.
This shows the most significant waits contributing to the DB Time.

Top 5 Timed Events
Event                     Waits      Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
db file sequential read   4,076,086   28,532             7               66.9  User I/O
CPU time                              11,214                             26.3
Backup: sbtbackup                 4    4,398     1,099,452               10.3  Administrative
log file sync                37,365    2,421            65                5.7  Commit
log file parallel write      37,928    1,371            36                3.2  System I/O
Here, the significant wait is the db file sequential read which contributes to 67% of DB Time.
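When "db file sequential read" dominates like this, it can help to see which segments are taking the most physical reads; the V$SEGMENT_STATISTICS view gives a starting point (counts are cumulative since instance startup):

```sql
-- Top 10 segments by physical reads since instance startup.
SELECT *
FROM  (SELECT owner, object_name, value AS physical_reads
       FROM   v$segment_statistics
       WHERE  statistic_name = 'physical reads'
       ORDER  BY value DESC)
WHERE rownum <= 10;
```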

5) Then , SQL Statistics can be checked.

SQL Statistics
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
The SQL Statistics section commonly contains the above four subsections.

Each subsection lists SQL statements ordered by the respective metric. For example, the "SQL ordered by Elapsed Time" subsection lists SQL statements in order of elapsed time, so high resource-consuming SQL can be spotted and targeted for tuning.

Note: All four subsections list the SQL statements in descending order, i.e., the highest elapsed time is shown first.
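Between snapshots, a similar ordering can be approximated live from the V$SQL view, which holds cumulative statistics for statements still in the shared pool:

```sql
-- Current top 5 statements by cumulative elapsed time (in microseconds).
SELECT *
FROM  (SELECT sql_id, elapsed_time, executions
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE rownum <= 5;
```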

6) Then comes the IO Stats section.

This shows the I/O statistics for each tablespace in the database.
As a rule of thumb, the Av Rd(ms) [average read time in milliseconds] should not exceed 30; values beyond that are considered an I/O bottleneck.

Tablespace IO Stats
ordered by IOs (Reads + Writes) desc
Tablespace  Reads      Av Reads/s  Av Rd(ms)  Av Blks/Rd  Writes   Av Writes/s  Buffer Waits  Av Buf Wt(ms)
TEMP        3,316,082         933       4.91        1.00   28,840            8             0           0.00
DAT1          520,120         146      16.06        1.21  185,846           52           902          13.00
DAT3           93,411          26      42.82        2.98   13,442            4            16          23.13
DAT2           98,171          28      91.97        7.97    5,333            2           325          34.89
In the above example, the Av Rd(ms) is high in several tablespaces, indicating I/O contention.

7) Then , Advisory Statistics can be checked.

This section shows the following:-

Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Stats
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
These advisories are commonly used to size the most important SGA structures (shared pool, buffer cache, etc.) and the PGA.

8) Then finally, the init.ora Parameters section is shown, which lists the parameters set at the instance level.

init.ora Parameters
All of the sections above, except DB Time, can be checked from a Statspack report as well.

Statspack snapshots are not generated automatically as in AWR; they must be generated manually during the problem period, as follows:

Take two snapshots 60 minutes apart during the problem, then generate the Statspack report:

SQL> exec statspack.snap
-- wait for 60 minutes
SQL> exec statspack.snap

Then run $ORACLE_HOME/rdbms/admin/spreport.sql and specify the BEGIN and END IDs of the snapshots taken during the problem.

The sections above are the most common checks that can be performed at the user level.
Further intensive analysis can be done through Oracle Support.