ORA-08004: sequence IEX_DEL_BUFFERS_S.NEXTVAL exceeds MAXVALUE

 Error:-

IEX: Scoring Engine Harness Error - ORA-08004: sequence IEX_DEL_BUFFERS_S.NEXTVAL exceeds MAXVALUE (Doc ID 2056754.1)


SYMPTOMS:-

You may have noticed that IEX_DELINQUENCIES_ALL is not getting updated with closed delinquencies, or with new delinquencies on past-due invoices, when the IEX: Scoring Engine Harness or IEX: Delinquency Management processes run. You have confirmed that no value is entered for the Maximum Number of Records parameter of the IEX: Scoring Engine Harness, yet no delinquencies are being created.

On further investigation, even though the IEX: Scoring Engine Harness completes successfully, the following message can be seen in the log:

ERROR IN CONCUR: IEX_SCORE: storeDelBuffest: UnExpected Error ORA-08004: sequence IEX_DEL_BUFFERS_S.NEXTVAL exceeds MAXVALUE and cannot be instantiated


CAUSE:-

The sequence IEX_DEL_BUFFERS_S, used when populating IEX_DELINQUENCIES_ALL, has reached its MAXVALUE, so new records can no longer be created.

Bug 21812472
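Before recreating anything, you can confirm the cause by checking how close the sequence is to its ceiling. A quick check against the standard DBA_SEQUENCES view (run as a DBA user):

select sequence_owner, sequence_name, max_value, last_number
from   dba_sequences
where  sequence_name = 'IEX_DEL_BUFFERS_S';

If LAST_NUMBER is at (or within one cache allocation of) MAX_VALUE, the sequence has indeed been exhausted.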

SOLUTION:-

To resolve the issue, the following SQL recreates the sequence IEX_DEL_BUFFERS_S with no MAXVALUE limit:


set serveroutput on

BEGIN

  execute immediate 'drop sequence IEX.IEX_DEL_BUFFERS_S';

  execute immediate 'create sequence IEX.IEX_DEL_BUFFERS_S START WITH 10000 '
                 || 'MINVALUE 1 INCREMENT BY 1 CACHE 100 NOORDER';

  DBMS_OUTPUT.put_line('New Sequence for IEX_DEL_BUFFERS_S ' || sqlerrm);

EXCEPTION

  WHEN OTHERS THEN

    DBMS_OUTPUT.put_line('Sequence creation failed IEX_DEL_BUFFERS_S ' || sqlerrm);

END;
/

commit;


The output should look like this:

New Sequence for IEX_DEL_BUFFERS_S ORA-0000: normal, successful completion

PL/SQL procedure successfully completed.

Commit complete.


-bash: dropdb: command not found in PostgreSQL

In PostgreSQL, a “command not found” error for client tools such as dropdb or psql (or “'psql' is not recognized as an internal or external command” on Windows) arises for the following reasons:

- PostgreSQL is not installed on the machine.

- The path to the PostgreSQL client tools is not set in the PATH environment variable.

The error can be fixed either by installing PostgreSQL or by adding the PostgreSQL bin directory to the PATH.

[UAT@postgres ~]$ dropdb UATDB;

-bash: dropdb: command not found

Your $PATH variable probably does not contain the PostgreSQL bin directory. echo $PATH should give something like:
# echo $PATH
/usr/bin:/bin:/opt/post/bin:/opt/post/sbin

# locate dropdb

/usr/local/pgsql/bin/dropdb

You can set the PATH variable with this command:

# export PATH=/usr/local/pgsql/bin:$PATH

[UAT@postgres ~]$ dropdb UATDB;


It works now.




Oracle 19c Interview Questions and Answers.

 


1. What is Oracle 19c?

Oracle 19c is a relational database management system (RDBMS) from Oracle Corporation. 

It is the long-term support release (LTSR) of the Oracle Database 12c Release 2 (12.2) line. 

Oracle 19c is available on-premises and in the cloud.


2. Are there any specific new features that can be used in the context of a database upgrade to Oracle 19c?

Yes, there are a few new features that can be used when upgrading to Oracle 19c. One is the AutoUpgrade utility, which can automate the process of upgrading your database end to end. Another is the Database Upgrade Assistant (DBUA), which provides a guided, graphical upgrade. Finally, Oracle SQL Developer's migration features can help with migrating database objects and data.


3. Is there anything special about upgrading from Oracle 12c to 19c as opposed to other versions?

Yes, there are a few things to keep in mind when upgrading to Oracle 19c. First, check operating system certification: Oracle 19c is only certified on newer OS releases (for example, Oracle Linux 7 or later on x86-64). Second, check client interoperability: very old client versions are not supported against a 19c database. Finally, you can upgrade a supported 12c release directly to 19c using DBUA or the AutoUpgrade utility; a Data Pump export/import is only needed when no direct upgrade path exists.


4. Is it possible to upgrade an existing 11gR2 or 12c non-container database (non-CDB) to Oracle 19c? If yes, how?

Yes, it is possible to upgrade an existing 11gR2 or 12c non-container database (non-CDB) to Oracle 19c. You can do this by using the Oracle Database Upgrade Assistant (DBUA).


5. In what situations would you use a container database architecture for your production databases?

Container database architectures are becoming increasingly popular for production databases, as they offer a number of advantages. One key advantage is that they allow for much easier management of multiple databases, as all of the databases can be managed from a single console. This can save a lot of time and effort, particularly for larger organizations with many databases. Additionally, container databases can offer better performance and scalability than traditional database architectures.


6. What is a pluggable database?

A pluggable database is a database that is contained within another database, known as the container database. 

The pluggable database shares certain resources with the container database, but it can also be configured and administered independently. 

This allows for greater flexibility and scalability when managing multiple databases.
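As a rough illustration of how lightweight PDB administration is, the following sketch creates a PDB from the seed, opens it, and lists the PDBs in the container (the PDB name, admin user, password, and file paths are illustrative and must match your environment):

CREATE PLUGGABLE DATABASE pdb2
  ADMIN USER pdbadmin IDENTIFIED BY "ChangeMe_1"
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/', '/u01/oradata/CDB1/pdb2/');

ALTER PLUGGABLE DATABASE pdb2 OPEN;

SELECT name, open_mode FROM v$pdbs;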


7. Does Oracle 19c support using multiple PDBs on the same server?

Yes, Oracle 19c does support using multiple PDBs on the same server. 

This allows for greater flexibility and scalability when deploying Oracle databases.


8. What are some key features of Autonomous Database Cloud Service?

Some key features of Autonomous Database Cloud Service include:

– The ability to automatically provision and manage Oracle databases in the cloud

– The ability to elastically scale compute and storage resources as needed

– The ability to patch and upgrade databases automatically

– The ability to monitor and manage databases using the Oracle Cloud Infrastructure Console


9. Can you explain what SQL Plan Management is and why we need it?

SQL Plan Management is a feature in Oracle 19c that allows the database to manage SQL execution plans automatically. It helps keep performance stable by ensuring that the optimizer uses only execution plans that are known to perform well (accepted plan baselines). SQL Plan Management can also prevent plan regressions by falling back to a previous plan if a new one is found to be sub-optimal.
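As a minimal sketch, you could capture a plan that is already in the cursor cache into a SQL plan baseline and then review it (the sql_id value is a placeholder; DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE is a function, so it is called from PL/SQL):

DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234xyz99');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_loaded);
END;
/

SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;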

10. What are the main benefits provided by Oracle Machine Learning?

Oracle Machine Learning provides a number of benefits, including the ability to automatically generate models from data, the ability to deploy models in a production environment, and the ability to monitor and manage models. Additionally, Oracle Machine Learning provides a number of tools that make it easier to work with data, including data preparation, feature selection, and model evaluation.


11. What’s the difference between Oracle Advanced Security Option and Oracle Data Redaction?

Oracle Advanced Security Option provides a comprehensive set of tools for securing data in transit and at rest, including encryption, data masking, and access control. Oracle Data Redaction, on the other hand, is a feature that allows you to selectively redact sensitive data from your database, making it unreadable to unauthorized users.


12. How do you ensure data security when storing sensitive information in a database like Oracle 19c?

When storing sensitive information in a database like Oracle 19c, you need to ensure that the data is encrypted. You can do this by using a tool like Oracle Advanced Security. This will encrypt the data both at rest and in transit, making it much more difficult for unauthorized individuals to access it.
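For example, one common way to encrypt data at rest is a TDE-encrypted tablespace. A minimal sketch, assuming a TDE keystore has already been configured and opened (tablespace name and datafile path are illustrative):

CREATE TABLESPACE secure_data
  DATAFILE '/u01/oradata/ORCL/secure_data01.dbf' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

SELECT tablespace_name, encrypted FROM dba_tablespaces;

Any table created in this tablespace is then encrypted on disk without application changes.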


13. What is the best way to secure passwords stored in Oracle 19c?

The best way to secure passwords used with Oracle 19c is to keep them out of scripts and configuration files and store them in a secure external password store (an Oracle wallet). The wallet acts as a secure, centralized repository for storing and managing credentials: it provides a single point of control, and everything in it is stored in an encrypted format.


14. How can you prevent unauthorized users from accessing sensitive information in Oracle 19c?

You can prevent unauthorized users from accessing sensitive information in Oracle 19c by using the Data Redaction feature. This feature allows you to specify which columns of data should be redacted, and how the data should be redacted. You can also specify which users should have access to the redacted data.
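A minimal sketch of such a policy, assuming an illustrative HR.EMPLOYEES table with an SSN column (schema, table, column, and policy names are placeholders):

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    policy_name   => 'redact_ssn',
    column_name   => 'SSN',
    function_type => DBMS_REDACT.FULL,
    expression    => '1=1');   -- redact for every session that is not exempt
END;
/

Users without the EXEMPT REDACTION POLICY privilege then see redacted values instead of the real column data.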


15. What is the importance of the Oracle wallet feature?

The Oracle wallet feature is important because it allows for the secure storage of sensitive information, such as passwords, within the database. This means that the information is less likely to be compromised if the database is breached.


16. What is Oracle Real Application Clusters? Why should we use it?

Oracle Real Application Clusters (RAC) is a software feature that allows Oracle databases to be deployed on a cluster of servers. This provides high availability and scalability for Oracle databases. RAC can be used to improve the performance of Oracle databases, and it can also be used to provide failover protection in the event of a server failure.


17. What are the advantages of using Oracle Enterprise Manager?

Oracle Enterprise Manager provides a comprehensive and centralized solution for managing Oracle databases, middleware, and applications. 

It offers a complete set of tools for managing and monitoring all aspects of your IT infrastructure, including:

– Provisioning and patching

– Configuration management

– Performance monitoring

– Capacity planning

– Security and compliance

– Backup and recovery

Oracle Enterprise Manager also provides a self-service portal for users, so they can request and provision new resources, as well as monitor the status of their existing resources.

18. Do you think there will ever be a situation where a business won’t need a data warehouse?

I don’t think so. Data warehouses will always be necessary to store and analyze large amounts of data.

19. What’s your opinion on Oracle Exadata? When should it be used?

Oracle Exadata is a powerful tool that can be used to improve the performance of Oracle databases. It should be used in situations where the database is expected to experience high demand or where the database is particularly large.


20. What are the differences between Oracle Exadata and Amazon Web Services RDS?

Oracle Exadata is an engineered system that combines servers, storage, and software optimized for running Oracle Database, and it is available both on-premises and as a cloud service; Amazon Web Services RDS is a managed cloud service that provides a relational database. Exadata includes all of the hardware and software necessary to run an Oracle database, while RDS provides only the managed database software on AWS infrastructure. Exadata also comes with a complete set of tools for managing and monitoring the Oracle database, whereas RDS provides a more limited set of management and monitoring tools.

Top Oracle DBA, RAC, And Performance Tuning Questions


Q #1) Why do we use the materialized view instead of a table or views?

Answer: A materialized view is a database object that stores the results of a query. If materialized views are used instead of tables or views in complex query executions, performance is enhanced because repeated queries do not have to be re-executed.
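As a small illustration (table and column names are made up), a materialized view that pre-aggregates order totals and is refreshed on demand might look like this:

CREATE MATERIALIZED VIEW mv_order_totals
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
  SELECT customer_id, SUM(order_amount) AS total_amount
  FROM   orders
  GROUP  BY customer_id;

-- refresh later, after the base table has changed
EXEC DBMS_MVIEW.REFRESH('MV_ORDER_TOTALS', 'C');

Queries against mv_order_totals then read the stored results instead of re-aggregating the orders table each time.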


Q #2) How is the Clustered Index different from the Non-Clustered Index?

Answer: An index is a schema object, which can search the data efficiently within the table.

Indexes can be clustered or non-clustered. Differences include:

In a clustered index, table records are physically sorted and stored in a particular order; hence, a table can have only one clustered index. In a non-clustered index, the sorting is logical and does not match the physical order of the records.

The leaf node of a clustered index holds the data pages while the non-clustered index holds the index rows.


Q #3) What are the different responsibilities of a DBA?

Answer: DBA is the database administrator who performs all administrative tasks.

Administrative tasks include:

User-level administration i.e. creates users, removes existing users, or modifies user permissions.

Maintains database security.

Manages database storage & objects.

Tunes the performance of a database.

Performs backups & recovery tasks.


Q #4) What do you mean by Database Normalization and why is it important?

Answer: The normalization technique is a set of rules that are used to organize the relational database to prevent data redundancy and dependency. 

Once initial database objects are identified, normalization helps in identifying the relationships between schema objects.

Different normalization forms are as follows:

First Normal Form (1NF)

Second Normal Form (2NF)

Third Normal Form (3NF)

Boyce-Codd Normal Form (BCNF)

Fourth Normal Form (4NF)

Fifth Normal Form (5NF)


Q #5) Can you list down the different components of physical and logical database structure?

Answer: Given below is the list of different components.

The physical structure includes:

Data files, which hold all the DB objects like tables, views, indexes, etc.

Redo Log files, which maintain the records of database changes as a result of user transactions.

Control files, which maintain the database status and physical structure.

The logical structure includes:

Tablespace, which is a logical storage unit where the database object resides.

Segments are logical storage units only but within a tablespace.

Extent is the logical unit where various contiguous data blocks and extents together form a segment.

A data block is the smallest logical storage unit in the database.


Q #6) What is a SYSTEM tablespace and why do we need it?

Answer: System tablespace is created at the time of database creation. This tablespace holds all the data dictionary tables and hence it is used for the creation of various database objects. System tablespace must stay online for the database to function smoothly.


Q #7) What do you mean by SGA and how is it different from PGA?

Answer: SGA (System Global Area) is the memory area allocated by Oracle at instance startup. This area can be shared by system-level processes, and hence it is also known as the Shared Global Area.

PGA (Program Global Area) is memory specific to a single process or session. It is created when the Oracle process starts, and each process has its own dedicated PGA.
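A quick way to see both areas from SQL*Plus (standard dynamic performance views; requires a suitably privileged session):

SELECT * FROM v$sga;

SELECT name, value
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter', 'total PGA allocated');

SHOW PARAMETER sga_target
SHOW PARAMETER pga_aggregate_target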


Q #8) What is a password file in a database and why is it required when a user can be authenticated using data dictionary tables?

Answer: Database users can be authenticated using the data dictionary tables, as they store the username and password. If the password provided by a user matches the one stored in the database, the user can log in. However, this can happen only while the database is open.

If the database is shut down, these tables cannot be accessed, so the password file is used by database administrators to log in and open the database.


Q #9) What are the different types of backups that are available in Oracle?

Answer: On a higher level, there are 2 types of backup that are available in Oracle which are physical & logical.

During physical backup, copies of the physical database files (data files, control files, redo logs, and other executables) are created and saved for the future. This can be achieved using either operating system utilities or RMAN.

In contrast, logical backup allows taking a backup of database objects such as tables, views, indexes, and stored procedures individually, through the Export/Import utility provided by Oracle.


Q #10) What do we mean by hot backup & cold backup and how are they different?

Answer: Hot backup is the process of taking a database backup while the database is running; hence it is also known as online backup. Cold backup can be taken only when the database is shut down, and hence it is known as offline backup.

Some systems, such as banking and trading sites, are operational 24 hours a day and cannot afford to bring the database down; for those, DBAs must take backups in online mode.


Q #11) What is the difference between restoring a database and recovering a database? 

Answer: During restoration, the backup files are copied from disk, media, or tape to the restore location, and the database is made operational from those files. Recovery has an additional step: the restored data files are updated by applying the redo logs, so that changes which were not part of the backup are recovered as well.

Let us understand this with the help of a scenario.

A full database backup is taken on Friday at 11 PM.

The database crashes on Saturday at 7 AM.

We can restore the lost files using the 11 PM full backup; this is restoration. However, the data is then current only as of Friday 11 PM, not Saturday 7 AM. To bring the database forward to the point of failure, the redo logs are applied; this is recovery.


Q #12) What do you understand by Redo Log file mirroring?

Answer: The redo log is the most crucial component of the database architecture; it records every transaction in the database even before the change reaches the data files. Hence, these files are mirrored to protect them: redo log mirroring (multiplexing) writes each redo log group to multiple members on different disks simultaneously. Standby technologies such as Data Guard add further protection by shipping the redo to another site.


Q #13) How is incremental backup different from differential backup?

Answer: An incremental backup backs up only the data files changed since the last backup, whether that backup was full or incremental. For example, if an incremental/full backup is taken at 10 AM on Friday and the next backup is taken at 10 AM on Saturday, the second incremental backup contains only the changes that occurred after 10 AM on Friday.

A differential backup, on the other hand, backs up the files changed since the last full backup.

If you take a full backup on Friday at 10 AM and then a differential backup on Saturday at 10 AM, it backs up the files changed since Friday at 10 AM. If a further differential backup is taken on Sunday at 10 AM, it again backs up the files changed since Friday at 10 AM.


Q #14) What is a Flashback Query and when should it be used?

Answer: Oracle's flashback technology recovers past states of database objects; it can even recover accidental changes that have already been committed. How far back you can go depends on the value of the UNDO_RETENTION parameter.

For example, if UNDO_RETENTION is set to 2 hours and a user accidentally deletes data at 11 AM and commits, then using a Flashback Query those rows can be retrieved only until 1 PM.
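A minimal Flashback Query sketch (table name and timestamp are illustrative; the undo data must still be available):

SHOW PARAMETER undo_retention

SELECT *
FROM   hr.employees
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR)
WHERE  employee_id = 101;

The rows returned are the values as they existed an hour ago, which can then be used to re-insert or correct the accidentally changed data.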


Q #15) How is RMAN better than the user-managed backup recovery process?

Answer: Recovery Manager (RMAN) is an Oracle built-in utility that can automate database backup and recovery processes and administer backup strategies. In user-managed backups, the user must perform the backup manually.

RMAN backup times are shorter than user-managed backups because RMAN maintains all the metadata in its repository and can retrieve it quickly.

RMAN can take incremental backups rather than the full file backups done by user-managed backups, which again saves time.

RMAN backup and recovery scripts can be re-used and scheduled, and do not need manual intervention.

RMAN can automatically detect corrupted data blocks during the backup process and recover them, which does not happen in user-managed backups.


Q #16) What is a Recovery Catalog?

Answer: A recovery catalog is a database schema that holds the metadata used by RMAN for restoration and recovery processes.

It basically stores information on

Data files & their backup files.

Archived Redo Logs & their backup sets.

Stored scripts

Incarnation

Backup history

The catalog gets updated once RMAN takes the backup or switches redo log or changes data file.

Q #17) How do you recover a lost control file?

Answer: If one of the control files is lost or gets corrupted, we can use another copy of the control file maintained at the OS level. If all copies of the control file are lost, or only a single copy was being maintained and it is lost, then a user can:

Manually create a control file.

Recreate it from the trace script generated earlier with the command below.

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

Restore it from an RMAN backup using the below commands.

SET DBID XX;

RESTORE CONTROLFILE FROM AUTOBACKUP;

Q #18) What is the difference between media recovery & crash recovery?

Answer: Media recovery is the process of recovering the database from a backup when there is a disk failure; physical files such as data files, control files, or server parameter files are recovered during media recovery. Crash recovery, on the other hand, is performed whenever a database instance failure occurs.

Media recovery must be performed by the DBA, while crash recovery is an automated process taken care of by the SMON background process.

Q #19) What is RAC and what are the various benefits of using RAC architecture?

Answer: RAC (Real Application Clusters) allows the database to be installed across multiple servers that form a cluster and share the same storage structure. This protects the database from a single point of failure, as one instance will always stay up even if another fails.

Using RAC helps in

Maintaining high availability of the system.

Managing workload with the least expenses.

Scalability & agility.

Q #20) How would you differentiate between cluster and grid?

Answer: Clustering is an integral part of grid infrastructure and focuses on a specific objective.

A grid, which may or may not consist of multiple clusters, is a wider framework that enables sharing of storage systems, data, and other resources across different geographical locations.

A cluster has a single owner, whereas a grid can have multiple owners depending on the number of clusters it holds.


Q #21) What do you understand from Cache Fusion?

Answer: Cache Fusion is the process of transferring data blocks from one instance's buffer cache to another's at very high speed within the cluster. Instead of reading the block from the physical disk, which is slow, the block is shipped from the other instance's cache.

For example, if instance A wants a data block currently owned by instance B, it sends a request to instance B and accesses the block from instance B's buffer cache.


Q #22) How can a single instance environment be converted into the RAC environment and how will they be different?

Answer: Single instance can be converted into RAC using one of the below methods:

Enterprise Manager

DBCA i.e. Database Configuration Assistant

RCONFIG Utility

Single Instance Environment vs RAC Environment

Parameter                | Single Instance Environment                                  | RAC Environment
Instance                 | One                                                          | Multiple
Memory                   | The instance has its own dedicated SGA                       | Every instance has its own separate SGA
Access to physical files | Only one instance accesses the data files and control files  | Data files and control files are shared across all instances
Flash Recovery Log       | Accessed by a single instance                                | Shared by multiple instances
Redo Logs                | Dedicated to the single instance                             | Only one instance can write at a time, but others can read them during recovery or archiving

Q #23) How can we monitor the space allocations in a database?

Answer: We can use the below data dictionary tables to monitor the space allocations:

DBA_FREE_SPACE

DBA_SEGMENTS

DBA_DATA_FILES
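For example, two simple queries against those views (rounding to MB for readability):

SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
GROUP  BY tablespace_name;

SELECT file_name, tablespace_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_data_files
ORDER  BY tablespace_name;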

Q #24) What do you understand by “Performance Tuning of DB” & what are the different areas where we can perform tuning?

Answer: It is the process of enhancing database performance by making optimal use of the available resources.

Performance can be enhanced by tuning any of the below areas:

Database design.

Memory allocation.

Disk I/Os.

Database contention.

OS level (CPU).

Q #25) What are the different tools that are provided by Oracle to assist performance monitoring?

Answer: Various tools include:

AWR (Automatic Workload Repository)

ADDM (Automatic Database Diagnostic Monitor)

TKPROF

STATSPACK

OEM (Oracle Enterprise Manager)

Q #26) What are the different optimizers that are used to optimize the database?

Answer: There are two types of optimizers:

Rule-Based Optimizer (RBO): used when the referenced objects do not maintain any internal statistics.

Cost-Based Optimizer (CBO): when the referenced objects maintain internal statistics, the CBO checks the possible execution plans and selects the one with the lowest cost.

Q #27) What is an explain plan and how does it help in optimizing the SQL query?

Answer: EXPLAIN PLAN is a statement that displays the execution plan chosen by the Oracle optimizer for SELECT, INSERT, UPDATE, and DELETE statements. By looking at this plan, one can check whether Oracle is choosing the right indexes and the proper join and sort operations.
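A minimal example (HR sample schema objects used purely for illustration):

EXPLAIN PLAN FOR
SELECT e.employee_id, d.department_name
FROM   hr.employees e
JOIN   hr.departments d ON d.department_id = e.department_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

The DBMS_XPLAN output shows the chosen access paths (index or full scan), join methods, and estimated rows and cost for each step.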


Q #28) How can we collect the statistics of different database objects?

Answer: The ANALYZE statement can be used to collect statistics for various database objects such as tables, indexes, partitions, clusters, or object references. Using this statement we can also identify migrated and chained rows within a table or cluster.
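For illustration (table name is a placeholder; the CHAINED_ROWS table must first be created with $ORACLE_HOME/rdbms/admin/utlchain.sql). Note that DBMS_STATS is the recommended interface for optimizer statistics in current releases, while ANALYZE remains useful for listing chained rows:

ANALYZE TABLE hr.employees COMPUTE STATISTICS;

ANALYZE TABLE hr.employees LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*) FROM chained_rows;

-- preferred way to gather optimizer statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');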


Q #29) Why do we need to rebuild indexes?

Answer: Rebuilding indexes is required in order to improve application performance. Due to repeated INSERT and DELETE operations an index becomes fragmented and unstructured, making the application slow. Rebuilding reorganizes the data within the index.
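For example (index name is illustrative; ONLINE keeps the table available for DML during the rebuild):

ALTER INDEX hr.emp_name_idx REBUILD ONLINE;

-- optionally check the index structure first to decide whether a rebuild is worthwhile
ANALYZE INDEX hr.emp_name_idx VALIDATE STRUCTURE;
SELECT name, height, lf_rows, del_lf_rows FROM index_stats;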


Q #30) What is TKPROF and how can we use it?

Answer: TKPROF is a tuning utility provided by Oracle which can convert SQL trace files into a readable format.

Once a trace file has been generated using the SQL Trace facility, TKPROF can be run against the trace file and the output can be read. It can also generate the execution plan for the SQL statements. The TKPROF executable is located in the $ORACLE_HOME/bin directory.
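A minimal tracing sketch for the current session (the trace file identifier is arbitrary, and the tkprof command at the end is illustrative and runs at the OS prompt, not in SQL*Plus):

ALTER SESSION SET tracefile_identifier = 'my_trace';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
-- run the SQL statements to be analyzed here
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

-- then, at the OS prompt:
-- tkprof <trace_file>.trc tkprof_report.txt sys=no sort=prsela,exeela,fchela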


Q #31) How can we tune a SQL query to optimize the performance of a database?

Answer: Listed below are a few of the best practices for writing SQL queries.

Column names should be provided instead of * in SELECT statements.

Joins should be used in the place of sub-queries.

EXISTS should be used instead of IN to verify the existence of data.

UNION ALL should be used in the place of UNION.

HAVING should be used only for filtering the resulting rows from the SQL query.
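To illustrate one of these guidelines, the EXISTS form of a lookup lets the optimizer stop probing as soon as a match is found (table names are illustrative; on recent optimizer versions both forms are often transformed into the same plan):

-- IN with a subquery
SELECT c.customer_id, c.customer_name
FROM   customers c
WHERE  c.customer_id IN (SELECT o.customer_id FROM orders o);

-- equivalent EXISTS form
SELECT c.customer_id, c.customer_name
FROM   customers c
WHERE  EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);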

Q #32) How would you identify the SHARED_POOL_SIZE parameter that needs to be adjusted?

Answer: The following are indications that SHARED_POOL_SIZE needs to be adjusted:

Getting an ORA-04031 error.

Performance degrades even when all the other parameters are already optimized.

Poor library cache and data dictionary cache hit ratios.

Q #33) What do you understand by Row Chaining?

Answer: When a row is too large to fit into a single data block, it ends up using consecutive blocks; this is known as row chaining. It can be reduced by setting the storage parameters to appropriate values.


Q #34) What is table partitioning and why is it required?

Answer: Table partitioning is the process of dividing a table into smaller pieces so that data retrieval is easier and quicker. Each piece is known as a partition and can be accessed separately. Apart from tables, indexes can also be partitioned.
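A small range-partitioning sketch (table, columns, and date boundaries are illustrative):

CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_2023_h1 VALUES LESS THAN (TO_DATE('01-JUL-2023','DD-MON-YYYY')),
  PARTITION sales_2023_h2 VALUES LESS THAN (TO_DATE('01-JAN-2024','DD-MON-YYYY')),
  PARTITION sales_max     VALUES LESS THAN (MAXVALUE)
);

-- read a single partition directly
SELECT COUNT(*) FROM sales PARTITION (sales_2023_h1);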


Q #35) How can we identify the resources for which the sessions are waiting?

Answer: We can find this out using the V$SESSION_WAIT and V$SYSTEM_EVENT views.


PLS-00201: identifier 'SYS.XDB_MIGRATESCHEMA' must be declared

 [applmgr@oracle ~]$ sqlplus apps/*****@DB

SQL> alter package AD_ZD_PREP compile body;

Warning: Package Body altered with compilation errors.

SQL> show error;

Errors for PACKAGE BODY AD_ZD_PREP:

LINE/COL ERROR

-------- -----------------------------------------------------------------

467/3    PL/SQL: SQL Statement ignored

467/19   PL/SQL: ORA-00942: table or view does not exist

471/5    PL/SQL: SQL Statement ignored

471/21   PL/SQL: ORA-00942: table or view does not exist

491/5    PL/SQL: Statement ignored

491/5    PLS-00201: identifier 'SYS.XDB_MIGRATESCHEMA' must be declared

505/5    PL/SQL: SQL Statement ignored

505/21   PL/SQL: ORA-00942: table or view does not exist


Solution:

SQL> select owner, object_name from dba_objects where lower(object_name) ='xdb_migrateschema';

no rows selected

[oracle@ora admin]$ sqlplus / as sysdba

SQL> grant READ on DIRECTORY FND_DIAG_DIR to APPS;

SQL> grant WRITE on DIRECTORY FND_DIAG_DIR to APPS;

SQL>@$ORACLE_HOME/rdbms/admin/dbmsxdbschmig.sql

SQL>@$ORACLE_HOME/rdbms/admin/prvtxdbschmig.plb

SQL> select owner, object_name from dba_objects where lower(object_name) ='xdb_migrateschema';

OWNER              OBJECT_NAME

SYS         XDB_MIGRATESCHEMA

SYS         XDB_MIGRATESCHEMA

2 rows selected.


[oracle@ora ~]$ cd /u01/db/tech_st/12.1.0.2/appsutil/admin

(ensure adgrants.sql is version 120.67.12020000.69 2020/03/20 00:56:38 or higher)

[oracle@ora ~]$ sqlplus /nolog

SQL> @adgrants.sql apps

[applmgr@oracle ~]$ sqlplus apps/*****@DB

SQL> alter package AD_ZD_PREP compile body;

Package body altered.

SQL> select owner,object_name,object_type from dba_objects where status='INVALID' and object_name like 'AD_ZD%';

SQL> select object_type,status from dba_objects where object_name='AD_ZD_PREP';



UNEXPECTED Nodes X are listed in the ADOP_VALID_NODES table but not in the FND_NODES table

Error : 

adop phase=FS_CLONE failed with below error.

Validating system setup.

    UNEXPECTED Nodes X are listed in the ADOP_VALID_NODES table but not in the FND_NODES table.

    [UNEXPECTED]To correct this, run AutoConfig on nodes ""

    [UNEXPECTED]Error determining whether this is a multi-node instance

[STATEMENT] Please run adopscanlog utility, using the command

"adopscanlog -latest=yes"

to get the list of the log files along with snippet of the error message corresponding to each log file.

adop exiting with status = 2 (Fail)


Solution:


01. Backup the fnd_oam_context_files, fnd_nodes, and adop_valid_nodes tables in the EBS env nodes:

sqlplus applsys/****@PDB

create table fnd_oam_context_files_bkp26jan as select * from fnd_oam_context_files;

create table fnd_nodes_bk26jan23 as select * from fnd_nodes;

create table adop_valid_nodes_bk26jan23 as select * from adop_valid_nodes;

02. Truncate the following tables:

truncate table fnd_oam_context_files;

truncate table fnd_nodes;

truncate table adop_valid_nodes;

03.  Run AutoConfig on the DB tier

        Confirm Autoconfig completes successfully

04. Stop application service.

05. Run Autoconfig on the run file system.

echo $FILE_EDITION    (should return: run)

Confirm Autoconfig completes successfully

Before running Autoconfig on the patch file system, the EBS_LOGON trigger MUST be disabled.

After the successful completion of Autoconfig, the EBS_LOGON trigger MUST be re-enabled.

06. Log on to sqlplus as system (sqlplus system/****@PDB): alter trigger ebs_logon disable;

07.  Run Autoconfig on the patch file system

echo $FILE_EDITION    (should return: patch)

08. Logon to sqlplus as system(sqlplus system/****@PDB): alter trigger ebs_logon enable;

09. After Autoconfig has been run successfully on all nodes, run the following two (2) queries in order to verify the tables have been correctly populated:

SQL>

        set pagesize 5

        set linesize 132

        col node_name format a15

        col server_id format a8

        col server_address format a15

        col platform_code format a4

        col webhost format a12

        col domain format a20

        col virtual_ip format a12

select  node_id,  platform_code, support_db D, support_cp C, support_admin A, 

support_forms F, support_web W, node_name, server_id, server_address, domain, webhost, virtual_ip, 

status from fnd_nodes order by node_id;

SQL>

        set pagesize 5

        set linesize 132

        col NAME format A20

        col VERSION format A12

        col PATH format A110

        col STATUS format A10

select NAME,VERSION,PATH, STATUS from FND_OAM_CONTEXT_FILES;

 10. Start application service.

APP-FND-01388: Cannot read value for profile option CONC_KEEP_BLANK_FILES in routine &ROUTINE

 Application version : R12.2.10

DB version : 19.17.0

ERROR

APP-FND-01388: Cannot read value for profile option CONC_KEEP_BLANK_FILES in routine &ROUTINE.

CAUSE

The issue is caused by the following defect:

BUG 30983111 - APP-FND-01388: CANNOT READ VALUE FOR PROFILE OPTION CONC_KEEP_BLANK_FILES

SOLUTION

To implement the solution, please execute the following steps:

1) Download and review the readme for Patch  30983111

2) Apply Patch 30983111 in a test environment.

3) Confirm the following file versions:

For 12.1: fnd src/process     afpprc.lpc 120.10.12010000.10

For 12.2: fnd src/process     afpprc.lpc 120.10.12020000.10

4) Retest the issue.

5) If the issue is resolved, migrate the solution as appropriate to other environments.

Note: Patch 30983111 exists for both 12.1.3 and 12.2 EBS versions.

The message is shown when the concurrent request completes with a Warning status.

You may set the profile option "Concurrent: Keep Empty Output Files" to avoid the error.

Also, the error is harmless and can technically be ignored.
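If you want to confirm the profile option from SQL before changing it through the System Administrator responsibility, a hedged check (run as the APPS user; the internal option name comes straight from the error message) could look like this:

SELECT profile_option_name, user_profile_option_name
FROM   fnd_profile_options_vl
WHERE  profile_option_name = 'CONC_KEEP_BLANK_FILES';

SELECT fnd_profile.value('CONC_KEEP_BLANK_FILES') AS current_value FROM dual;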


REFERENCES:

 (Doc ID 2736293.1)

WebADI Create Document Redirects From HTTP To HTTPS Causing Error ERR_SSL_PROTOCOL_ERROR

 Error ERR_SSL_PROTOCOL_ERROR 


SYMPTOMS

On Oracle EBS R12.2.10 the default URL is HTTP://, but when you navigate to Desktop Integrator -> Create Document the URL of the webpage redirects to HTTPS://.


STEPS

1) Using the Desktop Integrator responsibility.

2) Navigate to create document.

3) Select the Integrator, Layout and populate needed parameters.

4) Select Create Document and observe that the URL returned to create the document is https rather than http.

The following error is seen in the browser window when attempting to create the document:

This site can't provide a secure connection

ERR_SSL_PROTOCOL_ERROR

 Due to this issue, the document is not created.


SOLUTION

1) View the following line in the XML context file: <sslterminator oa_var="s_enable_sslterminator">

2) If the current value is <sslterminator oa_var="s_enable_sslterminator">#</sslterminator>, change that line in the XML context file from <sslterminator oa_var="s_enable_sslterminator">#</sslterminator> to <sslterminator oa_var="s_enable_sslterminator"/>.

3) Run the Autoconfig.

4) Restart the Services and Re-test.

Note: if the XML context file line is currently <sslterminator oa_var="s_enable_sslterminator"/>, modify it to <sslterminator oa_var="s_enable_sslterminator">#</sslterminator>, then follow the same steps as above and re-test.


Take a backup of $CONTEXT_FILE, then edit it:

$CONTEXT_FILE points to, for example, /apps/R12.2.10/fs2/inst/apps/test/appl/admin/test_oracle.xml

Replace sslterminator value in $CONTEXT_FILE

From

<sslterminator oa_var="s_enable_sslterminator"/>

TO

<sslterminator oa_var="s_enable_sslterminator">#</sslterminator>

save it.


[applmgr@oracle~#]cd $ADMIN_SCRIPTS_HOME

./adautocfg.sh

Start application services

[applmgr@oracle~#]cd $ADMIN_SCRIPTS_HOME

./adstrtal.sh apps/*****

Retest issue.


REFERENCES:

(Doc ID 2855758.1)
