ORA-609 : OPIODR ABORTING PROCESS UNKNOWN OSPID

ORA-609 : opiodr aborting process unknown ospid

WARNING: inbound connection timed out (ORA-3136)


Cause:

The ORA-609 error is thrown when a client connection of any kind fails to complete, or aborts, the connection
process before the server process is completely spawned.
Beginning with 10gR2, the default value for the inbound connect timeout is 60 seconds.

This error can also be triggered when a database session is killed or aborted manually from the OS prompt.

Solution:

As a preventive measure, increase the values for INBOUND_CONNECT_TIMEOUT in both the listener.ora and the server-side sqlnet.ora. If the problem is due to connection timeouts, an increase in the following parameters should eliminate or reduce the occurrence of ORA-609 errors.

Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120

ORA-20001: SYS_NTIJR7UZJGXWFGRAAUTZQHMQ== is an invalid identifier

When Using Gather Auto Option Errors - Is an Invalid Identifier (Doc ID 1363044.1)


APPLIES TO:

Oracle EBS Applications Performance - Version 12.1.2 to 12.1.3 [Release 12.1]
Information in this document applies to any platform.

SYMPTOMS

On Oracle Applications 12.1.2 version, when attempting to run concurrent program "Gather Schema Statistics" with
GATHER AUTO option, the following error occurs:

ORACLE error 20001 in FDPSTP

Cause: FDPSTP failed due to ORA-20001: SYS_NTIJR7UZJGXWFGRAAUTZQHMQ== is an invalid identifier
ORA-06512: at "APPS.FND_STATS", line 774
ORA-06512: at line 1
.

The SQL statement being executed at the time of the error was:
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP
ORA-20001: SYS_NTIJR7UZJGXWFGRAAUTZQHMQ== is an invalid identifier
+---------------------------------------------------------------------------+
End of log messages from FND_FILE
+---------------------------------------------------------------------------+
CP-GENERIC ORACLE ERROR (ERROR=ORA-20001: SQL_PLSQL_ERROR: N, ROUTINE, MO_GLOBAL.INIT, N, ERRNO, -20001, N, REA)
(ROUTINE=FND_CONCURRENT.INIT_REQUEST)
Concurrent request completed


CAUSE

This is explained in the following bug:

Bug 9542112 - ERROR IN R12 FOR GATHER SCHEMA STATS WITH 'GATHER AUTO' OPTION

As per bug 9542112, the issue occurs when running Gather Schema Statistics with the GATHER AUTO option. With GATHER AUTO there is an additional cursor (the empty-stats cursor) that selects tables with missing statistics (tables whose last-analyzed date is NULL).

The empty-stats cursor is executed after the GATHER AUTO pass completes, which means statistics are gathered on all candidate tables before the empty-stats cursor runs (and that is where the error is thrown). The error occurs while gathering statistics on those remaining tables, and only for tables that do not appear in DBA_TABLES.

SOLUTION

To implement the solution, please execute the following steps:

1. Apply patch 9542112 ERROR IN R12 FOR GATHER SCHEMA STATS WITH GATHER AUTO OPTION to a non critical environment.

2. Retest the issue.

3. Check the file versions on file system and database

AFSTATSB.pls 120.12.12010000.14

AFSTATSS.pls 120.7.12010000.10


[appl@sujeet ~]$ sqlplus apps/****** 

SQL*Plus: Release 10.1.0.5.0 - Production on Wed Dec 13 05:01:49 2017

Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select fnd_release.minor_version from dual; 

MINOR_VERSION
-------------
1

SQL> select fnd_release.major_version from dual; 

MAJOR_VERSION
-------------
12

SQL> !
[appl@sujeet ~]$ adident Header $FND_TOP/patch/115/sql/AFSTATSB.pls 
/apps/apps_st/appl/fnd/12.0.0/patch/115/sql/AFSTATSB.pls:
$Header AFSTATSB.pls 120.12.12010000.14 2009/12/17 11:59:49 msaleem ship $ 

[appl@sujeet ~]$ adident Header $FND_TOP/patch/115/sql/AFSTATSS.pls 

/apps/apps_st/appl/fnd/12.0.0/patch/115/sql/AFSTATSS.pls:
$Header AFSTATSS.pls 120.7.12010000.10 2009/12/16 12:32:58 msaleem ship $ 

4. Migrate the solution as appropriate to other environments.

REFERENCES

BUG:9542112 - ERROR IN R12 FOR GATHER SCHEMA STATS WITH GATHER AUTO OPTION

ORA-20005: object statistics are locked

ORA-20005 WHEN GATHERING STATISTICS FOR EBS APPS 11i or R12 IN Database 10G or 11G (Doc ID 375351.1)

APPLIES TO:
Oracle EBS Applications Performance - Version 11.5.10.2 to 12.1.3 [Release 11.5.10 to 12.1]
Information in this document applies to any platform.
10g or 11g Database with Apps 11i or R12


SYMPTOMS
Gathering statistics using concurrent jobs "Gather Schema Statistics / Gather Table Statistics" or
running FND_STATS fails with "ORA-20005: object statistics are locked".

Example 1:

Error #1: ERROR: While GATHER_TABLE_STATS:
object_name=AR.AR_REV_REC_QT***ORA-20005: object statistics are locked (stattype = ALL)***

Example 2:

In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP stats on table GAT_REQ_QTBL is locked

CAUSE

This can happen with Advanced Queue (AQ) tables. In 10g, if a queue is created or recreated during the upgrade, automatic statistics
gathering is locked (disabled) on these queue tables.

The following statement can be used to check the tables which have statistics locked:

    select owner, table_name, stattype_locked
    from dba_tab_statistics
    where stattype_locked is not null;

SOLUTION
Unlock statistics gathering on those queues by running the commands below. To unlock all the tables in a schema at once:

exec dbms_stats.unlock_schema_stats('schema_owner');

OR

To unlock individual tables (need to run for all tables individually):

exec dbms_stats.unlock_table_stats('table_owner','table_name');

Examples:

SQL> exec dbms_stats.unlock_schema_stats ('AR');
SQL> exec dbms_stats.unlock_table_stats('AR', 'AR_REV_REC_QT');


NOTE:

1. For E-Business Suite applications, stats SHOULD be collected on AQ tables.
 Therefore, stats should be unlocked on all AQ tables owned by any Apps schema.

2. The current recommended version of FND_STATS  is provided in the patch

Patch 8452962 - FND_STATS SHOULD NOT GATHER STATS ON INTERNAL TEXT INDEX TABLES:$I, $R, $K,$N,$P

3. Is using EXEC FND_STATS.LOAD_XCLUD_TAB a valid workaround for this issue?

Although forcing the queue tables to be excluded will work around the problem, it will eventually lead to performance
problems when those queue tables are used. The baseline support recommendation is that stats should be gathered on
E-Business Suite queue tables. Stats normally only become locked on E-Business Suite queue tables when the queues
are dropped and recreated, which can happen manually via SQL and during patching. The correct procedure is therefore
to always unlock the stats after the queues are recreated; add this check to any post-patching document you run on site.

4. Can one force Gather Schema Statistics to continue if it hits an object with locked stats?

Forcing Gather Schema Statistics to continue when it hits an object with locked stats is not a good idea, because of the
performance problems described in point 1. However, as a DBA it is your database and your choice how to maintain it.
What you need to do is monitor the amount of data being stored in the queue tables whose stats are locked.
If they consistently contain no rows, then there is no benefit in including them in the GSS run, so you could use the workaround above.

The problem then is that GSS will fail whenever a NEW queue table has its stats locked, which leads to a constant
monitoring exercise until all of the queue tables have been excluded.
For completeness, it is possible to force DBMS_STATS.GATHER_SCHEMA_STATS to ignore locked stats by passing the force => TRUE parameter.
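A minimal sketch of that direct DBMS_STATS call (the schema name, percent, and degree below are illustrative only; note that this bypasses FND_STATS, which is the supported method for E-Business Suite):

BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'AR',   -- schema name used here only for illustration
    estimate_percent => 10,
    degree           => 8,
    force            => TRUE);  -- gather even if an object's statistics are locked
END;
/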

How to restore dropped table in oracle 12c


Recovering a Dropped Table Using Oracle Flashback Drop 

Oracle Flashback Drop enables you to reverse the effects of dropping (deleting) a table, returning the dropped table to the database along with dependent objects such as indexes and triggers. This feature stores dropped objects in a recycle bin, from which they can be retrieved until the recycle bin is purged, either explicitly or because space is needed.

As with Flashback Table, you can use Flashback Drop while the database is open. Also, you can perform the flashback without undoing changes in objects not affected by the Flashback Drop operation. Flashback Drop is more convenient than forms of media recovery that require taking the database offline and restoring files from backup.

Note:

For a table to be recoverable using Flashback Drop, it must reside in a locally managed tablespace. Also, you cannot recover tables in the SYSTEM tablespace with Flashback Drop, regardless of the tablespace type.


Here is an example of recovering a dropped table with Flashback Drop.
To recover the table, we first check that it resides in the recycle bin,
and then issue the FLASHBACK TABLE ... TO BEFORE DROP command to recover it:

SQL> drop table tarauni;

Table dropped.

SQL> SHOW RECYCLEBIN;

ORIGINAL NAME  RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
-------------- ------------------------------ ------------ -------------------
tarauni        BIN$ANbliLHaSiu02xI+zbvDvQ==$0 TABLE        2012-03-26:16:51:54

SQL> select original_name from dba_recyclebin;

ORIGINAL_NAME
--------------------------------
tarauni

SQL> flashback table tarauni to before drop;

Flashback complete.

SQL> select * from tarauni;

ID
----------
1000

SQL> select count(*) from BIHAR;

  COUNT(*)
----------
    132172

SQL> drop table BIHAR;

Table dropped.

SQL> CREATE TABLE BIHAR (id NUMBER);   -- recreated here with an illustrative definition

Table created.

SQL> ALTER TABLE BIHAR RENAME TO BIHAR_bkp;

Table altered.

SQL> flashback table BIHAR to before drop;

Flashback complete.

SQL> select count(*) from BIHAR;

  COUNT(*)
----------
    132172



SQL> SHOW RECYCLEBIN;

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
BIHAR            BIN$X5W4sitiSZ3gUyqsxAob1Q==$0 TABLE        2017-12-05:10:13:41


The table can also be flashed back to its state at a specific point in time (here, the drop time shown above):

SQL> FLASHBACK TABLE BIHAR TO TIMESTAMP TO_TIMESTAMP('2017-12-05 10:13:41','YYYY-MM-DD HH24:MI:SS');

Flashback complete.

SQL> select count(*) from BIHAR;

  COUNT(*)
----------
    132172

You cannot run any DML or DDL against a dropped table:

SQL> Insert into tempp values(20);
Insert into tempp values(20)
*
ERROR at line 1:
ORA-00942: table or view does not exist

FLASHBACK TABLE TO A TIME IN THE PAST

First, enable row movement on the table (required for FLASHBACK TABLE ... TO TIMESTAMP):

SQL> ALTER TABLE BIHAR ENABLE ROW MOVEMENT;

TIME: 08:00:00

SQL> select * from BIHAR;

    SALARY
----------
      5000

TIME: 08:00:01

SQL> update BIHAR set salary = 6000;

1 row updated.

SQL> select * from BIHAR;

    SALARY
----------
      6000

SQL> commit;
Commit complete.

Now flashback table to time 08:00:00

SQL> FLASHBACK TABLE BIHAR TO TIMESTAMP TO_TIMESTAMP('2005-09-13 08:00:00','YYYY-MM-DD HH24:MI:SS');

Flashback complete.

SQL> SELECT * FROM BIHAR;
    SALARY
----------
      5000

Using Flashback Drop and Managing the Recycle Bin
When you drop a table, the database does not immediately remove the space associated with the table.
The database renames the table and places it and any associated objects in a recycle bin, where,
in case the table was dropped in error, it can be recovered at a later time. This feature is called Flashback Drop, and the FLASHBACK TABLE statement is used to restore the table. Before discussing the use of the FLASHBACK TABLE statement for this purpose, it is important to understand how the recycle bin works, and how you manage its contents.


What Is the Recycle Bin?
The recycle bin is actually a data dictionary table containing information about dropped objects.
Dropped tables and any associated objects such as indexes, constraints, nested tables,
and the like are not removed and still occupy space. They continue to count against user space quotas until they are explicitly purged from the recycle bin, or until the unlikely situation where the database must purge them because of tablespace space constraints.

Each user can be thought of as having his own recycle bin, since unless a user has the SYSDBA privilege, the only objects that the user has access to in the recycle bin are those that the user owns.
 A user can view his objects in the recycle bin using the following statement:

SELECT * FROM RECYCLEBIN;

When you drop a tablespace including its contents, the objects in the tablespace are not placed in the recycle bin and the database purges any entries in the recycle bin for objects located in the tablespace. The database also purges any recycle bin entries for objects in a tablespace when you drop the tablespace, not including contents, and the tablespace is otherwise empty.

Likewise:

When you drop a user, any objects belonging to the user are not placed in the recycle bin and any objects in the recycle bin are purged.

When you drop a cluster, its member tables are not placed in the recycle bin and any former member
 tables in the recycle bin are purged.

When you drop a type, any dependent objects such as subtypes are not placed in the recycle bin and
any former dependent objects in the recycle bin are purged.

Object Naming in the Recycle Bin

When a dropped table is moved to the recycle bin, the table and its associated objects are given
system-generated names. This is necessary to avoid name conflicts that may arise if multiple tables have the same name.

This could occur under the following circumstances:

A user drops a table, re-creates it with the same name, then drops it again.

Two users have tables with the same name, and both users drop their tables.

The renaming convention is as follows:

BIN$unique_id$version
where:

unique_id is a 26-character globally unique identifier for this object, which makes the recycle bin name unique across all databases

version is a version number assigned by the database

Enabling and Disabling the Recycle Bin

You can enable and disable the recycle bin with the recyclebin initialization parameter.
When the recycle bin is enabled, dropped tables and their dependent objects are placed in the recycle bin.
When the recycle bin is disabled, dropped tables and their dependent objects are not placed in the recycle bin;
they are just dropped, and you must use other means to recover them (such as recovering from backup).

The recycle bin is enabled by default.

To disable the recycle bin:

Issue one of the following statements:

ALTER SESSION SET recyclebin = OFF;

ALTER SYSTEM SET recyclebin = OFF;

To enable the recycle bin:

Issue one of the following statements:

ALTER SESSION SET recyclebin = ON;

ALTER SYSTEM SET recyclebin = ON;

Enabling and disabling the recycle bin with an ALTER SYSTEM or ALTER SESSION statement takes effect immediately.

 Disabling the recycle bin does not purge or otherwise affect objects already in the recycle bin.

Like any other initialization parameter, you can set the initial value of the recyclebin parameter in
 the text initialization file initSID.ora:

recyclebin=on
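
To confirm the current setting in a running instance, you can check the parameter from SQL*Plus (the output shown is only illustrative):

SQL> show parameter recyclebin

NAME          TYPE     VALUE
------------- -------- ------
recyclebin    string   on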


Viewing and Querying Objects in the Recycle Bin

Oracle Database provides two views for obtaining information about objects in the recycle bin:

View              Description
USER_RECYCLEBIN   Lets users see their own dropped objects in the recycle bin. It has a synonym, RECYCLEBIN, for ease of use.
DBA_RECYCLEBIN    Gives administrators visibility of all dropped objects in the recycle bin.

One use for these views is to identify the name that the database has assigned to a dropped object, as shown in the following example:

SELECT object_name, original_name FROM dba_recyclebin
   WHERE owner = 'HR';

OBJECT_NAME                    ORIGINAL_NAME
------------------------------ --------------------------------
BIN$yrMKlZaLMhfgNAgAIMenRA==$0 EMPLOYEES

You can also view the contents of the recycle bin using the SQL*Plus command SHOW RECYCLEBIN.

SQL> show recyclebin

ORIGINAL NAME    RECYCLEBIN NAME                OBJECT TYPE  DROP TIME
---------------- ------------------------------ ------------ -------------------
EMPLOYEES        BIN$yrMKlZaVMhfgNAgAIMenRA==$0 TABLE        2003-10-27:14:00:19

You can query objects that are in the recycle bin, just as you can query other objects. However,
 you must specify the name of the object as it is identified in the recycle bin. For example:

SELECT * FROM "BIN$yrMKlZaVMhfgNAgAIMenRA==$0";

Purging Objects in the Recycle Bin

If you decide that you are never going to restore an item from the recycle bin, you can use the PURGE statement to remove the items and their associated objects from the recycle bin and release their storage space.

You need the same privileges as if you were dropping the item.

When you use the PURGE statement to purge a table, you can use the name that the table is known by in the recycle bin or the original name of the table. The recycle bin name can be obtained from either the DBA_ or USER_RECYCLEBIN view as shown in "Viewing and Querying Objects in the Recycle Bin".

The following hypothetical example purges the table hr.int_admin_emp, which was renamed to BIN$jsleilx392mk2=293$0 when it was placed in the recycle bin:

PURGE TABLE "BIN$jsleilx392mk2=293$0";

You can achieve the same result with the following statement:

PURGE TABLE int_admin_emp;

You can use the PURGE statement to purge all the objects in the recycle bin that are from a specified
 tablespace or only the tablespace objects belonging to a specified user, as shown in the following examples:

PURGE TABLESPACE example;
PURGE TABLESPACE example USER oe;
Users can purge the recycle bin of their own objects, and release space for objects,
by using the following statement:

PURGE RECYCLEBIN;

If you have the SYSDBA privilege, then you can purge the entire recycle bin by specifying DBA_RECYCLEBIN, instead of RECYCLEBIN in the previous statement.

You can also use the PURGE statement to purge an index from the recycle bin or to purge from the recycle bin all objects in a specified tablespace.


Restoring Tables from the Recycle Bin

Use the FLASHBACK TABLE ... TO BEFORE DROP statement to recover objects from the recycle bin. You can specify either the name of the table in the recycle bin or the original table name.
An optional RENAME TO clause lets you rename the table as you recover it.
The recycle bin name can be obtained from either the DBA_ or USER_RECYCLEBIN view as shown in "Viewing and Querying Objects in the Recycle Bin". To use the FLASHBACK TABLE ...
TO BEFORE DROP statement, you need the same privileges you need to drop the table.

The following example restores the int_admin_emp table and assigns it a new name:

FLASHBACK TABLE int_admin_emp TO BEFORE DROP RENAME TO int2_admin_emp;

The system-generated recycle bin name is very useful if you have dropped a table multiple times.
 For example, suppose you have three versions of the int2_admin_emp table in the recycle bin and
you want to recover the second version. You can do this by issuing two FLASHBACK TABLE statements, or you can query the recycle bin and then flashback to the appropriate system-generated name, as shown in the following example. Including the create time in the query can help you verify
that you are restoring the correct table.

SELECT object_name, original_name, createtime FROM recyclebin; 

OBJECT_NAME                    ORIGINAL_NAME   CREATETIME
------------------------------ --------------- -------------------
BIN$yrMKlZaLMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP  2006-02-05:21:05:52
BIN$yrMKlZaVMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP  2006-02-05:21:25:13
BIN$yrMKlZaQMhfgNAgAIMenRA==$0 INT2_ADMIN_EMP  2006-02-05:22:05:53

FLASHBACK TABLE "BIN$yrMKlZaVMhfgNAgAIMenRA==$0" TO BEFORE DROP;

Restoring Dependent Objects

When you restore a table from the recycle bin, dependent objects such as indexes
 do not get their original names back; they retain their system-generated recycle bin names.
 You must manually rename dependent objects if you want to restore their original names.
If you plan to manually restore original names for dependent objects, ensure that you make
note of each dependent object's system-generated recycle bin name before you restore the table.

The following is an example of restoring the original names of some of the indexes of the dropped
table JOB_HISTORY, from the HR sample schema. The example assumes that you are logged in as the HR user.

After dropping JOB_HISTORY and before restoring it from the recycle bin, run the following query:

SELECT OBJECT_NAME, ORIGINAL_NAME, TYPE FROM RECYCLEBIN;

OBJECT_NAME                    ORIGINAL_NAME             TYPE
------------------------------ ------------------------- --------
BIN$DBo9UChtZSbgQFeMiAdCcQ==$0 JHIST_JOB_IX              INDEX
BIN$DBo9UChuZSbgQFeMiAdCcQ==$0 JHIST_EMPLOYEE_IX         INDEX
BIN$DBo9UChvZSbgQFeMiAdCcQ==$0 JHIST_DEPARTMENT_IX       INDEX
BIN$DBo9UChwZSbgQFeMiAdCcQ==$0 JHIST_EMP_ID_ST_DATE_PK   INDEX
BIN$DBo9UChxZSbgQFeMiAdCcQ==$0 JOB_HISTORY               TABLE


Restore the table with the following command:

FLASHBACK TABLE JOB_HISTORY TO BEFORE DROP;

Run the following query to verify that all JOB_HISTORY indexes retained their system-generated recycle bin names:

SELECT INDEX_NAME FROM USER_INDEXES WHERE TABLE_NAME = 'JOB_HISTORY';

INDEX_NAME
------------------------------
BIN$DBo9UChwZSbgQFeMiAdCcQ==$0
BIN$DBo9UChtZSbgQFeMiAdCcQ==$0
BIN$DBo9UChuZSbgQFeMiAdCcQ==$0
BIN$DBo9UChvZSbgQFeMiAdCcQ==$0

Restore the original names of the first two indexes as follows:

ALTER INDEX "BIN$DBo9UChtZSbgQFeMiAdCcQ==$0" RENAME TO JHIST_JOB_IX;
ALTER INDEX "BIN$DBo9UChuZSbgQFeMiAdCcQ==$0" RENAME TO JHIST_EMPLOYEE_IX;

Note that double quotes are required around the system-generated names.

ORA-01950: no privileges on tablespace 'SYSTEM'



ORA-01950: no privileges on tablespace "string"
 
Cause: User does not have privileges to allocate an extent in the specified tablespace.
Action: Grant the user the appropriate system privileges or grant the user space resource on the tablespace.



Solution :-

  • ALTER USER <username> QUOTA 100M ON <tablespace_name>;

  • GRANT UNLIMITED TABLESPACE TO <username>;

Also make sure the user has been granted the CONNECT and RESOURCE roles, in case the user was not given CREATE TABLE privileges.
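
A concrete, hedged example (SCOTT and USERS are placeholders for the actual user and tablespace named in the error):

-- grant a quota on the tablespace reported in the error
ALTER USER scott QUOTA 100M ON users;

-- or remove the quota limit on all tablespaces (use with care)
GRANT UNLIMITED TABLESPACE TO scott;

-- basic roles, in case the user lacks create privileges
GRANT CONNECT, RESOURCE TO scott;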


ORA-1652: unable to extend temp segment by 128 in tablespace TEMP2



**********************************************************************
Following errors written to the Alert log file. Please verify
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP2
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP2
ORA-1652: unable to extend temp segment by 128 in tablespace                 TEMP2

**********************************************************************

Solution

You can simply add space to the TEMP tablespace to avoid the ORA-1652 error, or wait for SMON to clean up the temporary segments.

The message basically means that free space from other instances is being requested, which typically signifies instance contention. Instance contention within the temporary space can make the instance take more time to process.

In severe cases a slowdown may occur, in which case you might want to try one of the following workarounds (a hedged example follows this list):
  1. Increase the size of the temp tablespace
  2. Increase sort_area_size and/or pga_aggregate_target
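
A hedged example of the second workaround, assuming an spfile is in use (2G is only a placeholder value):

SQL> ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE=BOTH;

The first workaround (adding space to the temporary tablespace) is shown further below with ALTER TABLESPACE ... ADD TEMPFILE.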

Identify temporary datafile details :

SQL> select file_name , TABLESPACE_NAME from DBA_TEMP_FILES;

FILE_NAME                           TABLESPACE_NAME
----------------------------------- ---------------
/apps/db/apps_st/data/temp01.dbf    TEMP1
/apps/db/apps_st/data/temp02.dbf    TEMP2

SQL> SELECT TABLESPACE_NAME,TOTAL_BLOCKS,USED_BLOCKS,FREE_BLOCKS FROM v$sort_segment;

TABLESPACE_NAME TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
--------------- ------------ ----------- -----------
TEMP2                 524160        3456      520704
TEMP                       0           0           0
TEMP1                 524160         512      523648



Add an additional tempfile to the temporary tablespace:
SQL> ALTER TABLESPACE TEMP1 ADD TEMPFILE '/u01/app/oracle/oradata/iamdb/temp03.dbf' SIZE 1024m;
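
Alternatively, an existing tempfile can be resized in place; a hedged example using the TEMP2 tempfile listed above (the new size is a placeholder):

SQL> ALTER DATABASE TEMPFILE '/apps/db/apps_st/data/temp02.dbf' RESIZE 2048M;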

Check whether there is any space available in the temporary tablespace (sort segment):

SELECT   A.tablespace_name tablespace, D.mb_total,
SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
D.mb_total -SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM     v$sort_segment A,
(
SELECT   B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
FROM     v$tablespace B, v$tempfile C
WHERE    B.ts#= C.ts#
GROUP BY B.name, C.block_size
) D
WHERE    A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;

TABLESPACE  MB_TOTAL  MB_USED  MB_FREE
----------- --------- -------- ---------
TEMP1            4096    6.625  4089.375
TEMP2            4096       43      4053


Monitoring Temporary Tablespaces and Sorting

Unlike datafiles, tempfiles are not listed in V$DATAFILE and DBA_DATA_FILES; use V$TEMPFILE and DBA_TEMP_FILES instead.
Temporary segments can be monitored from V$SORT_SEGMENT and V$SORT_USAGE.
DBA_FREE_SPACE does not record free space for temporary tablespaces; use V$TEMP_SPACE_HEADER instead:

SQL> select TABLESPACE_NAME, BYTES_USED, BYTES_FREE from V$TEMP_SPACE_HEADER;

TABLESPACE_NAME BYTES_USED  BYTES_FREE
--------------- ----------- ----------
TEMP1            4294967296          0
TEMP2            4294967296          0

Root Cause Analysis

Identify temp segment usages per session


Temp segment usage per session.

SQL> SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid, S.module,
            P.program, SUM(T.blocks) * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
            COUNT(*) statements
     FROM   v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
     WHERE  T.session_addr = S.saddr
     AND    S.paddr = P.addr
     AND    T.tablespace = TBS.tablespace_name
     GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
              P.program, TBS.block_size, T.tablespace
     ORDER BY mb_used DESC;

SID_SERIAL USERNAME OSUSER SPID MODULE PROGRAM MB_USED TABLESPACE STATEMENTS




Identify temp segment usages per statement

 Temp segment usage per statement.

SQL> SELECT  S.sid || ',' || S.serial# sid_serial, S.username, Q.hash_value, Q.sql_text,
T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace
FROM    v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
WHERE   T.session_addr = S.saddr
AND     T.sqladdr = Q.address
AND     T.tablespace = TBS.tablespace_name
ORDER BY mb_used;

SID_SERIAL USERNAME HASH_VALUE SQL_TEXT MB_USED TABLESPACE



SQL > select * from v$sort_segment;

TABLESPACE_NAME SEGMENT_FILE SEGMENT_BLOCK EXTENT_SIZE CURRENT_USERS TOTAL_EXTENTS TOTAL_BLOCKS USED_EXTENTS USED_BLOCKS FREE_EXTENTS FREE_BLOCKS ADDED_EXTENTS EXTENT_HITS FREED_EXTENTS FREE_REQUESTS MAX_SIZE MAX_BLOCKS MAX_USED_SIZE MAX_USED_BLOCKS MAX_SORT_SIZE MAX_SORT_BLOCKS RELATIVE_FNO

TEMP2 0 0 128 38 4095 524160 38 4864 4057 519296 0 4980625 0 0 4095 524160 4095 524160 4059 519552 0
TEMP 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1024
TEMP1 0 0 16 37 32760 524160 37 592 32723 523568 0 26657530 0 0 32760 524160 31266 500256 31191 499056 0


Each tablespace has one or more datafiles that it uses to store data.

The maximum size of a datafile depends on the block size of the database. I believe that, by default, that leaves you with a maximum of 32 GB per (smallfile) datafile.

To find out whether your actual limit is 32 GB, run the following:

sql> select value from v$parameter where name = 'db_block_size';

Compare the result you get with the block size to determine your maximum datafile size, as sketched below.

I have Oracle Personal Edition 11g R2, and a default install uses an 8,192-byte block size (32 GB per datafile).
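
As a rough sketch, the limit can be computed directly: a smallfile datafile can hold at most 4194303 (2^22 - 1) blocks, so the maximum size is db_block_size * 4194303 bytes (about 32 GB with an 8 KB block size):

SELECT value AS db_block_size,
       ROUND(TO_NUMBER(value) * 4194303 / 1024 / 1024 / 1024) AS max_datafile_gb
FROM   v$parameter
WHERE  name = 'db_block_size';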


Dropping / Recreating Temporary Tablespace Method

Keep in mind that the procedure documented here for dropping and recreating your temporary tablespace should be performed during off hours, with no users logged on performing work.

If you are working with a temporary tablespace in Oracle8i, or a temporary tablespace in Oracle9i that is NOT the default temporary tablespace for the database, this process is straightforward: simply drop and recreate the temporary tablespace.


SQL> DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;

Tablespace dropped.

SQL> DROP TABLESPACE temp;

Tablespace dropped.

SQL> CREATE TEMPORARY TABLESPACE TEMP
  2  TEMPFILE '/u02/oradata/TESTDB/temp01.dbf' SIZE 500M REUSE
  3  AUTOEXTEND ON NEXT 100M MAXSIZE unlimited
  4  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

Tablespace created.



SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

Database altered.

Create & drop TEMP TABLE-space in 11gR1

While doing this activity, the existing temporary tablespace may still have live sessions using it,
and because of that Oracle will not let us drop it. As a result, we need to kill the existing
sessions before dropping the old temporary tablespace.

The following query returns the temporary tablespace name and tempfile name, along with the path of that file.

SQL> select FILE_NAME,TABLESPACE_NAME from dba_temp_files;

The following statement creates a temporary tablespace named 'TEMP_NEW' of 500 MB, with autoextend on and maxsize unlimited.

SQL> CREATE TEMPORARY TABLESPACE TEMP_NEW TEMPFILE '/DATA/database/prod/temp_01.dbf' SIZE 500m autoextend on next 10m maxsize unlimited;

The following statement makes the newly created temporary tablespace ('TEMP_NEW') the database default temporary tablespace.

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP_NEW;

Retrieve the SID and SERIAL# of existing live sessions that are using the old
temporary tablespace (i.e. TEMP), and kill them.

SQL> SELECT b.tablespace,b.segfile#,b.segblk#,b.blocks,a.sid,a.serial#,
a.username,a.osuser, a.status
FROM v$session a,v$sort_usage b
WHERE a.saddr = b.session_addr;

Provide the values above to the following command to kill each session.

SQL> alter system kill session 'SID_NUMBER, SERIAL#NUMBER';

Example :-

SQL> alter system kill session '59,57391';

Now we can drop the old temporary tablespace without any trouble:

SQL> DROP TABLESPACE old_temp_tablespace including contents and datafiles;

The tablespace is dropped along with its contents and datafiles.

If you wish to continue using the old temporary tablespace name (i.e. 'TEMP'), follow the same steps mentioned above to recreate the temp tablespace with the old name, as sketched below.
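
A minimal sketch of that final step, reusing the conventions from the example above (the file path and size are placeholders):

CREATE TEMPORARY TABLESPACE TEMP
  TEMPFILE '/DATA/database/prod/temp_02.dbf' SIZE 500M
  AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP;

DROP TABLESPACE TEMP_NEW INCLUDING CONTENTS AND DATAFILES;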

ORA-20005: object statistics are locked (stattype = ALL)

ORA-20005 WHEN GATHERING STATISTICS FOR EBS APPS 11i or R12 IN Database 10G or 11G (Doc ID 375351.1)

ERROR:

FNDGSCST module: Gather Schema Statistics
+---------------------------------------------------------------------------+

Current system time is 25-NOV-2017 21:00:22

+---------------------------------------------------------------------------+

**Starts**25-NOV-2017 21:00:22
ORACLE error 20005 in FDPSTP

Cause: FDPSTP failed due to ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "APPS.FND_STATS", line 780
ORA-06512: at line 1



APPLIES TO:

Oracle EBS Applications Performance - Version 11.5.10.2 to 12.1.3 [Release 11.5.10 to 12.1]
Information in this document applies to any platform.
10g or 11g Database with Apps 11i or R12


SYMPTOMS

Gathering statistics using concurrent jobs "Gather Schema Statistics / Gather Table Statistics" or running FND_STATS fails with "ORA-20005: object statistics are locked".

Example 1:

Error #1: ERROR: While GATHER_TABLE_STATS:
object_name=AR.AR_REV_REC_QT***ORA-20005: object statistics are locked (stattype = ALL)***

Example 2:

In GATHER_SCHEMA_STATS , schema_name= ALL percent= 10 degree = 8 internal_flag= NOBACKUP stats on table GAT_REQ_QTBL is locked

CAUSE

This can happen with Advanced Queue (AQ) tables. In 10g, if a queue is created or recreated during the upgrade,
automatic statistics gathering is locked (disabled) on these queue tables.

The following statement can be used to check the tables which have statistics locked:
[oracle@sujeet ~]$ sqlplus / as sysdba


SQL> select owner, table_name, stattype_locked
    from dba_tab_statistics
    where stattype_locked is not null;



SOLUTION

Unlock statistics gathering on those queues by running the commands below. To unlock all the tables in a schema at once:

exec dbms_stats.unlock_schema_stats('schema_owner');

OR

To unlock individual tables (need to run for all tables individually):

exec dbms_stats.unlock_table_stats('table_owner','table_name');

Examples:

SQL> exec dbms_stats.unlock_schema_stats ('AR');
SQL> exec dbms_stats.unlock_table_stats('AR', 'AR_REV_REC_QT');


SQL> exec dbms_stats.unlock_schema_stats('SYSTEM');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.unlock_schema_stats('APPLSYS');

PL/SQL procedure successfully completed.


SQL> exec dbms_stats.unlock_schema_stats('SYS');

PL/SQL procedure successfully completed.


Internal Server Error

Intermittent Login issue R12.1 - 500 Internal Server Error

Problem Description:

Users are unable to get application login page.
Users are frequently getting below error while accessing the EBS application R12.1.3 URL.

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, ohs_admin@acs.net and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.

Cause:

This issue occurred due to low oacore heap size.

Solution:

Increase the oacore heap size and restart application services.

Change the parameters in $ORA_CONFIG_HOME/10.1.3/opmn/conf/opmn.xml
of both form nodes as shown below:

From:
-Xmx512M -Xms128M -XX:MaxPermSize=160M

To:
-Xmx1024M -Xms128M -XX:MaxPermSize=160M


Reference:

JVM: Guidelines to setup the Java Virtual Machine in Apps E-Business Suite 11i and R12 (Doc ID 362851.1)


How to run Gather Statistics for a Schema in R12.1.3

How to run Gather Schema Statistics concurrent program

1. Log on to Oracle Applications with
    Responsibility = System Administrator

2. Submit Request Window
    Navigate to: Concurrent > Requests

3. Query for the Gather Schema Statistics

4. Enter the appropriate parameters. This can be run for specific schemas by specifying the schema name or entering  ‘ALL’  to gather statistics for every schema in the database

5. Submit the Gather Schema Statistics program


Parameters :

Schema Name:
  Schema for which statistics are to be gathered. Specify ALL for all Oracle Applications schemas

Percent:
  The sampling percentage. If left blank, the default value of 10 is used. The valid range is from 0 to 100

Degree: 
 The degree of parallelism to be used for gathering statistics. If a Degree is not provided, it defaults to the minimum of parallel_max_servers and cpu_count.

Backup Flag:
 If NOBACKUP is used, the GATHER_SCHEMA_STATS procedure will not back up the current statistics, so it will run faster.

Restart Request ID: 
 In the case where the Gather Schema Statistics run fails for whatever reason, the concurrent request can be re-submitted and it will pick up where the failed run left off, if you provide the concurrent request_id of the failed run.

History Mode:
 Last Run – History records for each object are maintained only for the last gather statistics run. Each subsequent run overwrites the previous history record for the object. This is the default behavior.

Gather Options:
This parameter specifies how objects are selected for statistics gathering.

GATHER: All tables and indexes of the schema schemaname are selected for stats gathering. This is the default.

GATHER AUTO: Tables of the schema schemaname for which the percentage of modifications has exceeded modpercent are selected for statistics gathering.
Indexes of these tables are selected by default. Table monitoring needs to be enabled before using this option.

GATHER EMPTY : Statistics are gathered only for tables and indexes that are missing statistics.

LIST AUTO: This option does not gather statistics. It only provides a listing of all the tables that would be selected for statistics gathering
if the GATHER AUTO option were used.

LIST EMPTY: This option does not gather statistics. It only provides a listing of all the tables that would be selected for statistics gathering
if the GATHER EMPTY option were used.

Modifications Threshold:  Applicable only to GATHER AUTO and LIST AUTO Options

Invalidate Dependent Cursors:  This flag indicates whether cursors dependent on the table being analyzed should be invalidated or not. By default, dependent cursors are invalidated.
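
For reference, the same gather can also be run ad hoc from SQL*Plus via FND_STATS. This is only a hedged sketch: the positional values mirror the schema_name / percent / degree values echoed in the concurrent request log, and 'GL' is an illustrative schema, so verify the signature against your AFSTATSS.pls version.

SQL> exec fnd_stats.gather_schema_stats('GL', 10, 4);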

How to check network latency in oracle R12.1.3


1. Navigate System Administrator Responsibility.
2. Navigate to Application
     Click on Network Test.




3. Click the ‘Clear Old Test Data’ button to purge previous test results from the database.

4. Specify the number of Trials and the Iterations for each trial for both the latency and bandwidth blocks. The default settings for both are 5 trials of 100 iterations each.

5. Select the Run Test button to perform the test.

Latency test:- 
It examines the time taken for a single packet to make a round trip from the client side application to the server.

 Bandwidth test:-
It examines the data rate to see how many bytes per second the network can transfer from the server to the client.




Note :- Oracle recommends that network latency should be low and bandwidth should be high.

ORA-32004

Facing below issue on Oracle 12c database.

SQL> startup
ORA-32004: obsolete or deprecated parameter(s) specified for RDBMS instance 
ORACLE instance started.
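
To see which parameter triggered the warning, check the alert log, or list deprecated parameters that are explicitly set (ISDEPRECATED and ISDEFAULT are standard V$PARAMETER columns):

SQL> select name, value
     from   v$parameter
     where  isdeprecated = 'TRUE'
     and    isdefault    = 'FALSE';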

Solution:-

Shut down the database.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Edit the PFILE:-

Go to the PFILE location.
[sujeet@oracle ~]$ cd $ORACLE_HOME/dbs

[sujeet@oracle dbs]$ vi initORCL.ora

Comment out the parameter below:
#sec_case_sensitive_logon = FALSE

Save the pfile.

Start the database with the pfile.

SQL> startup pfile=/apps/dba/db/tech_st/12.1.0.2/dbs/initORCL.ora
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size             872417400 bytes
Database Buffers         1258291200 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.

SQL> create spfile from pfile;

File created.


SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down


SQL> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size             872417400 bytes
Database Buffers         1258291200 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.

Issue resolved.


Upgrade Timezone version 10 to 18 in Oracle

Upgrading the timezone_file version manually during an Oracle 12c upgrade

Some of us might have faced a situation where the timezone_file version has to be upgraded manually during an Oracle 12c upgrade. Here are the steps to follow in Windows.

When you upgrade Oracle to 12.1.0.2, you will have an option in DBUA to upgrade the time zone data.





If you don’t check it, then, when you run “post_upgrade_checks.sql”, it will give you the below warning message.

**********************************************************************
Check Tag:     OLD_TIME_ZONES_EXIST
Check Summary: Check for use of older timezone data file
Fix Summary:   Update the timezone using the DBMS_DST package after upgrade is complete.
**********************************************************************
Fixup Returned Information:
INFORMATION: --> Older Timezone in use
     Database is using a time zone file older than version 18.
     After the upgrade, it is recommended that DBMS_DST package
     be used to upgrade the 12.1.0.2.0 database time zone version
     to the latest version which comes with the new release.
     Please refer to My Oracle Support note number 977512.1 for details.

**********************************************************************

To upgrade it manually:

Upgrade the time zone version from 10 to 18.

SQL> SELECT version FROM v$timezone_file;
  
  VERSION
----------
        10

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ ------------------------------
DST_PRIMARY_TT_VERSION         10
DST_SECONDARY_TT_VERSION       0
DST_UPGRADE_STATE              NONE

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup upgrade
ORACLE instance started.
Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size             889194616 bytes
Database Buffers         1241513984 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.

SQL> EXEC DBMS_DST.BEGIN_UPGRADE (18);

PL/SQL procedure successfully completed.

SQL> shut immediate

SQL> startup

SQL> SELECT version FROM v$timezone_file;

   VERSION
----------
        18
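
Note that DBMS_DST.BEGIN_UPGRADE only switches the database to the new time zone version; per MOS note 977512.1 (referenced above), the TSTZ data stored in tables still has to be upgraded and the upgrade window closed. A minimal sketch of those remaining calls, to be run after restarting in normal mode (see the note for handling any reported failures):

SET SERVEROUTPUT ON
DECLARE
  l_failures PLS_INTEGER;
BEGIN
  DBMS_DST.UPGRADE_DATABASE(l_failures);
  DBMS_OUTPUT.PUT_LINE('UPGRADE_DATABASE failures: ' || l_failures);
  DBMS_DST.END_UPGRADE(l_failures);
  DBMS_OUTPUT.PUT_LINE('END_UPGRADE failures: ' || l_failures);
END;
/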


Enable Trace for Concurrent Programs in Oracle Apps R12.1.3

Tracing a concurrent program is a very good option for debugging issues that occur while the program runs.


Agenda :

 To enable trace for concurrent programs
 To debug issues in concurrent programs or their dependent objects

Steps :

 1. Go to the Concurrent Program definition:

Application Developer –> Concurrent –> Program –> check Enable Trace

Save it.

Query your concurrent program and check the Enable Trace check box at the bottom of the screen.

2. Set the profile option 'Concurrent: Allow Debugging' to YES and save it.

3. Navigate to the Responsibility from where you are running the program.

4. Before submitting the concurrent program in the Submit Request screen, there will be a Debug Options button; click it.

5. In Debug Options, select SQL Trace and enable 'Trace with binds'.

6. Submit the Concurrent Program and note down the Request id.

7. The trace file name will always be postfixed with the oracle_process_id. To get the oracle_process_id, use the SQL below:
 select oracle_process_id from fnd_concurrent_requests where request_id = &Request_id;

8. The trace file path can be derived using the following:

SQL> select value from v$parameter where name = 'user_dump_dest';

Trace file location:-
/apps/sonu/db/tech_st/11.1.0/admin/SONU_oracle/diag/rdbms/sonu/SONU/trace
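
On 11g, the trace directory can also be read directly from V$DIAG_INFO:

SQL> select value from v$diag_info where name = 'Diag Trace';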

How to convert a .trc file to a .out file :-
tkprof ACSDEV_ora_15529_SEKARS.trc ACSDEV_ora_15529_SEKARS.out explain=apps/apps sys=no sort=PRSDSK,EXEDSK,FCHDSK,EXECPU,FCHCPU


9. Get the trace file to your local machine. Understanding the raw trace is very complex, so use the
TKPROF utility to make the trace file readable.

10. Open the command prompt and run the command below:
 tkprof <Trace_File_Name.trc> <Output_File_Name.out> SORT=fchela

11. A new output file will be generated with the name given in the above command.
Analyse the output file to find the answers to your problem.


SQL query to find the log files by request id


select req.oracle_process_id tracename, req.request_id, req.requested_by, usr.user_name,
       prog.user_concurrent_program_name, req.request_date, req.phase_code, req.status_code,
       req.logfile_name, req.outfile_name, dest.value as user_dump_dest
from   apps.fnd_concurrent_requests req, gv$parameter dest,
       apps.fnd_concurrent_programs_vl prog, apps.fnd_user usr
where  req.concurrent_program_id = prog.concurrent_program_id
and    req.requested_by = usr.user_id
--and  req.request_id like '17352717'
and    dest.name = 'user_dump_dest'
and    req.request_id like &Request_id;


SQL query to find the log files by username

select req.oracle_process_id tracename, req.request_id, req.requested_by, usr.user_name,
       prog.user_concurrent_program_name, req.request_date, req.phase_code, req.status_code,
       req.logfile_name, req.outfile_name, dest.value as user_dump_dest
from   apps.fnd_concurrent_requests req, gv$parameter dest,
       apps.fnd_concurrent_programs_vl prog, apps.fnd_user usr
where  req.concurrent_program_id = prog.concurrent_program_id
and    req.requested_by = usr.user_id
--and  req.request_id like '17352717'
and    dest.name = 'user_dump_dest'
and    usr.user_name like 'sujeet%jha%'
order by req.request_date desc;



SQL> select oracle_process_id from fnd_concurrent_requests where request_id = &Request_id;

SQL> select * from apps.fnd_concurrent_requests req, gv$parameter dest, apps.fnd_concurrent_programs_vl prog
     where req.concurrent_program_id = prog.concurrent_program_id
     and   req.request_id like '17352717'
     and   dest.name = 'user_dump_dest';

How to add employee detail in oracle apps R12

Employee creation and user mapping
Employee creation, Employee role assignments and user mapping:

Issue reason:

Sometimes we get a permission issue while accessing some modules/responsibilities such as
'iReceivables, Service, Credit Management Credit Analyst', etc. To resolve this we have to create a new employee, import that employee,
assign roles (not mandatory), and finally map that employee to the application login user in the "Person" field.

Creating an employee
An employee is the most important entity in an organization. Before creating an employee, the HR officer must know the date from which the employee will be active in the organization. In Oracle terminology, you can call it the employee's hire date. Apart from this, the HR officer must know basic details of the employee such as first name, last name, date of birth, and so on.

Navigation: Global HRMS Manager -->People -->Enter and Maintain.





Select New and enter the employee details such as name and gender. From the Action tab, select the "Create Employment"
option to create the employee. Then enter the employee number under Identification, save, and enter the address details.





 Import the Employee:

CRM Administrator --> Resource Manager -->import Resource

On the resource search page, enter the employee name and search for the employee.
After finding the employee, select it, click the Start Import button, click OK, and finally click the Save Resource button.

If you want to assign any roles to this employee, click the Details button and assign the required roles
(Credit Analyst, Credit Manager, Collections Agent, Collections Manager, etc.).

Employee Mapping:

Once the employee creation process is done, map this employee to the application user.
Navigation: System Administrator --> Security --> User --> Define

Query the application user, select this employee in the Person field, and save.
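
A hedged verification query, assuming the standard FND_USER.EMPLOYEE_ID to PER_ALL_PEOPLE_F.PERSON_ID link (substitute the application user name at the prompt):

select u.user_name, p.full_name
from   fnd_user u, per_all_people_f p
where  u.employee_id = p.person_id
and    u.user_name   = upper('&user_name')
and    trunc(sysdate) between p.effective_start_date and p.effective_end_date;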

ORA-08004: sequence IEX_DEL_BUFFERS_S.NEXTVAL exceeds MAXVALUE

 Error:- IEX: Scoring Engine Harness Error - ORA-08004: sequence IEX_DEL_BUFFERS_S.NEXTVAL exceeds MAXVALUE (Doc ID 2056754.1) SYMPTOMS:- Yo...