After upgrading the database to 12.1.0.2, all database user accounts became EXPIRED & LOCKED

I upgraded an EBS 12.2.4 database from 11.2.0.3 to 12.1.0.2 using DBUA, and while connecting as the APPS user it reported that the account was locked.
Checking the account status in the DBA_USERS view revealed that all database user accounts had become EXPIRED & LOCKED.

I am still not sure what triggered it. Probably SQLNET.ALLOWED_LOGON_VERSION = 8 was set in sqlnet.ora, and Oracle sets sec_case_sensitive_logon=TRUE while upgrading the database with DBUA. In any case, the accounts were expired.
Since all EBS database accounts were expired, it was not possible to simply change the passwords to new values: passwords are handled differently in EBS, and each EBS database user password must match the encrypted password stored in the FND_ORACLE_USERID table.

However, I was aware that if the same database password is set again using "ALTER USER ... IDENTIFIED BY VALUES", the account is re-opened and can be used again, provided the password version still includes the 10G verifier. It can even be set to use the 10G password version only.

Whether an account still has a 10G verifier can be checked from the PASSWORD_VERSIONS column of the DBA_USERS view.

SQL> select distinct PASSWORD_VERSIONS from dba_users;

PASSWORD_VERSIONS
-----------------
10G
10G 11G
10G 11G 12C

Since the EBS database accounts were created in older database versions, they still have the 10G password version.

From 11g onward, password verifiers are stored differently: they are kept in different columns of SYS.USER$ (the 10G hash in PASSWORD, the 11G/12C verifiers in SPARE4).
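
A quick way to see this is the following query (assuming access to SYS.USER$; the user name APPLSYS is only an illustration):

SQL> select name, password, spare4 from sys.user$ where name = 'APPLSYS';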

Solution:

We need to unlock the accounts with "ALTER USER ... ACCOUNT UNLOCK" for all the users.
This can be done with a small script (see the sketch below).
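
A minimal sketch (run as SYS; review the generated unlock.sql before executing it):

set pages 0 lines 200 heading off feedback off
spool unlock.sql
select 'ALTER USER "'|| username ||'" ACCOUNT UNLOCK;'
from dba_users
where account_status like '%LOCKED%';
spool off
-- then run the generated script: @unlock.sql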

After that we need to reset the same password for the users whose PASSWORD_VERSIONS includes 10G.

This can also be achieved with a script:

set lines 500
set long 2000000
set longchunksize 20000
set pages 0 heading off feedback off
spool chgpwd.sql
SELECT 'ALTER USER '|| name ||' IDENTIFIED BY VALUES '||chr(39)|| spare4 ||';'|| password ||chr(39)||';'
FROM sys.user$
WHERE name IN (SELECT username FROM dba_users WHERE account_status = 'EXPIRED');
spool off

Then run chgpwd.sql. It will reset the same password for each of the expired accounts.

After that everything was fine, and I proceeded with the remaining post-upgrade steps.

Compilation of database objects is taking time during a patch application in online patching

Compilation of database objects can take a long time during patch application in online patching, and DBA_OBJECTS does not show the objects that still need compiling.

In this case we need to query AD_OBJECTS to get the actual number of invalid objects in EBS 12.2, where a new patch edition is created in the database during the prepare phase of the adop cycle.
While adop is compiling objects and taking time, we can use the following two queries to see the objects, including stub objects, that still need to be compiled.

select count(*)
from ad_objects
where status = 'INVALID';

select * from
(
select
eusr.user_name owner
, count(decode(obj.type#,88,NULL,decode(obj.status,1,NULL,1))) Actual
, count(decode(obj.type#,88,decode(obj.status,1,NULL,1),NULL)) Stub
, count(decode(obj.type#,88,decode(obj.status,1,NULL,1),decode(obj.status,1,NULL,1))) Total
from
sys.obj$ obj
, sys.obj$ bobj
, (
select
xusr.user#
, xusr.ext_username user_name
, ed.name edition_name
from
(select * from sys.user$ where type# = 2) xusr
, (select * from sys.obj$ where owner# = 0 and type# = 57) ed
where xusr.spare2 = ed.obj#
union
select
busr.user#
, busr.name user_name
, ed.name edition_name
from
(select * from sys.user$ where type# = 1 or user# = 1) busr
, (select * from sys.obj$ where owner# = 0 and type# = 57) ed
where ed.name = 'ORA$BASE'
) eusr
where obj.owner# = eusr.user#
and bobj.obj#(+) = obj.dataobj#
and obj.type# not in (0, 10)
and ( obj.type# != 88 or (obj.type# = 88 and bobj.type# != 10) )
and obj.remoteowner is null
group by eusr.user_name
) x
where total > 0
order by 1;

However, you may observe that compilation completes while leaving many invalid objects in the newly created patch edition.

If the patch has completed, the remaining invalid stub objects will be compiled when the adop cutover phase is executed.

Restore a database to a different host when both the database and the original server are lost

Way back in 2009 I had just joined a company and was less than a month into the job.

I was assigned a task to restore database from RMAN backup to a different host.

This database was used with ODI 10g.
The database version was 10.2.0.3.

I only had the name of the database and the name of the server on which it had resided.

database name: CMPSOCRC
server name : usmliu156

The backups were available in Veritas NetBackup.

Solution: The steps below were followed to restore the database on the new host.

1.
Modify the init.ora file on the new server and start the instance in NOMOUNT mode.
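
For example (the pfile path is illustrative):

SQL> startup nomount pfile='/iu167/u01/app/oracle/cmpsocrc/10.2.0/dbs/initCMPSOCRC.ora'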

2.
Connect to the catalog database and find the DB_KEY and DBINC_KEY for the latest incarnation of the database.

SQL> SELECT DB_KEY,DBINC_KEY,RESETLOGS_CHANGE#,NAME,STATUS,CURRENT_INCARNATION,RESETLOGS_TIME,DBID FROM RC_DATABASE_INCARNATION WHERE NAME='CMPSOCRC';

DB_KEY DBINC_KEY RESETLOGS_CHANGE# NAME STATUS CURRENT_I RESETLOGS_TIME DBID
---------- ---------- ----------------- ------------------------------ ------ --------- -------------- ----------
1210318 1210319 1253323 CMPSOCRC CURRENT YES 13-JUN-08 2441445747
1210318 1210330 707175 CMPSOCRC PARENT NO 21-MAY-08 2441445747

Find the datafile and tempfile names from RC_BACKUP_DATAFILE and RC_TEMPFILE.

SQL> col name format a50
SQL> select file#,name from rc_backup_datafile where DB_NAME='CMPSOCRC' and DB_KEY=1210318 and DBINC_KEY=1210319;

FILE# NAME
---------- --------------------------------------------------
1 /iu156/u03/oradata/data/cmpsocrc/system01.dbf
2 /iu156/u03/oradata/data/cmpsocrc/undotbs01.dbf
3 /iu156/u03/oradata/data/cmpsocrc/sysaux01.dbf
4 /iu156/u03/oradata/data/cmpsocrc/users01.dbf
5 /iu156/u03/oradata/data/cmpsocrc/user_data01.dbf
6 /iu156/u03/oradata/data/cmpsocrc/orabpel.dbf
7 /iu156/u03/oradata/data/cmpsocrc/oraesb.dbf
8 /iu156/u03/oradata/data/cmpsocrc/orawsm.dbf
9 /iu156/u01/app/oracle/cmpsocrc/10.2.0/dbs/ORAUDDI.dbf

SQL> select FILE#,NAME from RC_TEMPFILE where DB_NAME='CMPSOCRC' and DB_KEY=1210318 and DBINC_KEY=1210319;

FILE# NAME
---------- --------------------------------------------------
1 /iu156/u02/oradata/data/cmpsocrc/temp01.dbf

3.
Perform a restore and recovery

NB_ORA_CLIENT is the old host name on which the database originally resided.

rman target / catalog rman/rman@rmant

set dbid=2441445747

run {
allocate channel c1 type sbt parms 'ENV=(NB_ORA_CLIENT=usmliu156)';
restore controlfile;
}

sql 'alter database mount';

run {
allocate channel c1 type sbt parms 'ENV=(NB_ORA_CLIENT=usmliu156)';
set newname for datafile 1 to '/iu167/u04/oradata/data/cmpsocrc/system01.dbf';
set newname for datafile 2 to '/iu167/u04/oradata/data/cmpsocrc/undotbs01.dbf';
set newname for datafile 3 to '/iu167/u04/oradata/data/cmpsocrc/sysaux01.dbf';
set newname for datafile 4 to '/iu167/u04/oradata/data/cmpsocrc/users01.dbf';
set newname for datafile 5 to '/iu167/u04/oradata/data/cmpsocrc/user_data01.dbf';
set newname for datafile 6 to '/iu167/u04/oradata/data/cmpsocrc/orabpel.dbf';
set newname for datafile 7 to '/iu167/u04/oradata/data/cmpsocrc/oraesb.dbf';
set newname for datafile 8 to '/iu167/u04/oradata/data/cmpsocrc/orawsm.dbf';
set newname for datafile 9 to '/iu167/u04/oradata/data/cmpsocrc/ORAUDDI.dbf';
restore database;
switch datafile all;
recover database;
}

4.
While the recovery is running, get the redo log file locations and create soft links on the new host so that the original redo log paths resolve.

SQL> select name from RC_REDO_LOG where DB_NAME='CMPSOCRC' and DB_KEY=1210318 and DBINC_KEY=1210319;

NAME
--------------------------------------------------------------------------------
/iu156/u02/oradata/redo/cmpsocrc/redo01a.log
/iu156/u03/oradata/redo/cmpsocrc/redo01.log
/iu156/u02/oradata/redo/cmpsocrc/redo02a.log
/iu156/u03/oradata/redo/cmpsocrc/redo02.log
/iu156/u02/oradata/redo/cmpsocrc/redo03a.log
/iu156/u03/oradata/redo/cmpsocrc/redo03.log

5.
Once restore and recovery are done, open the database with RESETLOGS.

SQL> alter database open resetlogs;

6.
Recreate the redo log files at the new location (repeat for each redo log group).

alter database drop logfile group 3;

alter database add logfile group 3 ('/iu167/u04/oradata/data/cmpsocrc/redo3a.log','/iu167/u04/oradata/data/cmpsocrc/redo3b.log') size 52428800;

alter database add logfile group 1 ('/iu167/u04/oradata/data/cmpsocrc/redo1a.log','/iu167/u04/oradata/data/cmpsocrc/redo1b.log') size 52428800;

7.
Find the temp file names from V$TEMPFILE.

Drop the original temp files and create new temp files at the new locations.
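
A minimal sketch (the tablespace name TEMP and the new path/size are illustrative):

SQL> select file#, name from v$tempfile;
SQL> alter database tempfile '/iu156/u02/oradata/data/cmpsocrc/temp01.dbf' drop;
SQL> alter tablespace TEMP add tempfile '/iu167/u04/oradata/data/cmpsocrc/temp01.dbf' size 2048M;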

“Enq: SS Contention” and “Enq: TS Contention” waits in a RAC database in EBS environment

In an EBS environment where the database is heavily used, we can see "Enq: SS Contention" and "Enq: TS Contention" waits on a RAC instance even when we have a large temporary tablespace.

In EBS everything is executed through the APPS user, whose temporary tablespace is set to TEMP, and that tablespace has plenty of space.

However, we may still see database sessions on one instance suddenly start waiting on "Enq: SS Contention".

When we check V$SORT_SEGMENT, there are sufficient free extents available.
However, they belong to a different instance.
The instance that is waiting on the SS enqueue has nearly zero free extents.

This can be verified with the following query:

SELECT INST_ID,TABLESPACE_NAME,TOTAL_EXTENTS,USED_EXTENTS,FREE_EXTENTS from GV$SORT_SEGMENT;

This can happen if a database session running on one instance consumed a large number of temporary extents and then finished, so that instance now holds all the extents as FREE_EXTENTS.

If database sessions on another instance then start using temporary extents, they do not get the extents immediately because the extents are still held by the first instance, and the sessions start waiting on "Enq: SS Contention" and "Enq: TS Contention".

Solution:

We can free the extents from the instance that is holding them.
This is done by running the following in the instance that is occupying the temporary space:

alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1';

where TS# is the tablespace number of the temporary tablespace.
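
The TS# can be looked up first; a quick check, assuming the temporary tablespace is named TEMP:

SQL> select ts#, name from v$tablespace where name = 'TEMP';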

However, this can be automated with a script.
The following PL/SQL block frees the space immediately, and database sessions no longer wait on "Enq: SS Contention":

declare
  l_ts_number  number;
  l_drop_segs  varchar2(1000);
begin
  for i in 1..10000 loop
    select ts# into l_ts_number
    from ts$ a, dba_users b
    where b.username = 'APPS'
    and a.name = b.temporary_tablespace;
    l_ts_number := l_ts_number + 1;
    l_drop_segs := 'alter session set events '||chr(39)||'immediate trace name DROP_SEGMENTS level '||l_ts_number||chr(39);
    execute immediate l_drop_segs;
  end loop;
end;
/

Read-only user in database 12.1.0.2 for EBS

We often need to create a read-only user for developers that can run SELECT on all the tables in the database.
Generally this is accomplished in one of two ways:

1. Create a role and grant SELECT privilege on the tables and later role can be granted to individual users.

2. Create a role and grant SELECT ANY TABLE privilege to it and later role can be granted to individual users.

However, both approaches have the following drawback:

They also allow tables to be locked in EXCLUSIVE mode,
and the rows of a table can be locked using SELECT FOR UPDATE.

Oracle Database 12.1.0.2 now comes with a new privilege called READ.
We can grant the READ object privilege to let users query tables, views, materialized views, and synonyms.
The READ ANY TABLE system privilege allows querying any table or view in the database.

It is similar to the SELECT privilege; however, it does not grant the ability to lock a table or lock its rows with SELECT FOR UPDATE.
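
A minimal sketch (the role and user names are illustrative):

create role xx_read_only;
grant read any table to xx_read_only;

grant create session to scott;
grant xx_read_only to scott;

-- Connected as SCOTT, plain queries work, but row locking is rejected:
-- select * from applsys.fnd_user;            -- allowed
-- select * from applsys.fnd_user for update; -- fails: READ does not allow locking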


What “clscfg -patch” does in 12.1.0.2 Grid Infrastructure

12c Grid Infrastructure introduces a software patch level and an active patch level to ensure the GI home has identical patches on all nodes. Unless the cluster is in rolling patch mode, a node will not be able to join the cluster if it has a different patch level.

At the end of a patch application to GI home, the software patch level will be updated on each node. On the last node, the active patch level will be updated for the cluster.

Beginning with 12c, Oracle also stores the patch level of a cluster in the OCR file.

It’s highly recommended to manually take an OCR backup with “ocrconfig -manualbackup” command before and after applying a patch.

Because the patch level is stored in OCR, special consideration is needed when restoring an OCR backup.

Sometimes, after restoring OCR from a backup, the following can happen:

# crsctl stop rollingpatch

CRS-1170: Rejecting rolling patch mode change because the patch has not been applied yet. The software patch level [0] on this node is not the same as the expected patch level [1650217826].

clscfg -patch updates the patch level in OCR on all nodes.

#clscfg -patch
clscfg: -patch mode specified
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.

# crsctl stop rollingpatch
CRS-1161: The cluster was successfully patched to patch level [1650217826].

The clscfg -patch command also comes in handy when a patch application fails and is rolled back, but OCR still contains the patch-level entry for the patch that was rolled back. Running it updates OCR with the correct patch level.

We can also use the "kfod op=patches" command to list the patches applied on all cluster nodes.

OPatchAuto behavior in 12.1.0.2 Grid infrastructure while applying a patch

OPatchAuto supports two modes of patching a GI or RAC Home –

Rolling and Non-rolling.
When a patching session is started (on the first node), the stack has to be up and running on that node.
This applies to both rolling and non-rolling modes of patching.

Rolling Mode (Default Mode):

When performing patching in Rolling mode, the ORACLE_HOME processes on a particular node are shut down, the patch is applied, then the node is brought back up again.

This process is repeated for each node in the GI or RAC environment until all nodes are patched. This is the most efficient mode of applying an interim patch to an Oracle RAC setup because this results in no downtime.

Not all patches can be applied using Rolling mode. Whether or not a patch can be applied in this way is generally specified in the patch metadata.

The patch README also specifies whether or not a patch can be applied in Rolling mode.

The node (GI Home) from which the opatchauto command is executed is considered the LOCAL node and all other nodes are considered REMOTE nodes.

When we begin a rolling mode session, at least 1 remote node has to be up and running.

OPatchAuto applies patches in rolling mode by default.

Non-rolling Mode:

Prior to 12c, a non-rolling upgrade was defined as shutting down Oracle processes on all nodes.

Beginning with 12c, non-rolling patching requires the GI stack to be up on local node.

The patching operations on the first and last nodes involve special steps, so they are handled separately rather than in parallel with the other nodes.

Beginning with 12c, non-rolling patching occurs in three phases:
1. Patch Node 1
2. Patch Node 2 through n-1
3. Patch Node n

When we start a non-rolling mode session, none of the remote nodes can be up and running: the stack must be stopped on all nodes except the local node.

To run OPatchAuto in non-rolling mode, we must explicitly specify the -nonrolling option.

When a patch is applied with "opatchauto apply" on 12.1.0.2, it uses rolling mode by default and runs the following command to put the cluster into rolling patch mode:

crsctl start rollingpatch

If the cluster is already in rolling patch mode, this fails with the following:

Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [0].
PROC-28: Oracle Cluster Registry already in current version
CRS-1153: There was an error setting Oracle Clusterware to rolling patch mode.
CRS-4000: Command Start failed, or completed with errors.
2015/08/09 11:42:06 CLSRSC-430: Failed to start rolling patch mode

We can check the cluster upgrade state with the following:

crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [0].

Solution:

1:
As described in "CRS-1153: There was an error setting Oracle Clusterware to rolling patch mode" (Doc ID 1943498.1), run:

crsctl stop rollingpatch

2:

If GI 12.1.0.2 is down on the other nodes, opatchauto likewise cannot put the cluster into rolling patch mode.

We need to verify whether ASM is running on the other nodes and start it if it is not already running.

Running adcfgclone.pl for EBS 12.2 with AD.C.DELTA.7 and later automatically runs $FND_TOP/bin/txkrun.pl -script=SetSSOReg -removereferences=Yes

It runs the following command to remove the SSO references:

$FND_TOP/bin/txkrun.pl -script=SetSSOReg -removereferences=Yes

If OAM SSO is enabled for EBS 12.2, running removereferences removes all rows from the table FND_USER_PREFERENCES, including the rows with MODULE_NAME = 'OID_CONF'.

If we then want to register with OID again, we need to deregister and re-register the cloned EBS 12.2 instance with OID by running the following commands:

$FND_TOP/bin/txkrun.pl -script=SetSSOReg -deregisteroid=yes

$FND_TOP/bin/txkrun.pl -script=SetSSOReg -registeroid=yes

This registers EBS 12.2 with OID correctly.
However, the rows with MODULE_NAME = 'OID_CONF' are not inserted into FND_USER_PREFERENCES,
which causes errors when we create a user in EBS, because the user cannot be provisioned to OID.

In this case we need to run the following to populate the OID_CONF entries in FND_USER_PREFERENCES:

fnd_oid_plug.setPlugin

If removereferences is not run and we only run deregisteroid, the OID_CONF entries in FND_USER_PREFERENCES remain intact.

Running datapatch during 12.1.0.2 upgrade

As per the Oracle documentation, datapatch runs automatically during the upgrade when we run the upgrade manually while upgrading to 12.1.0.x.
It does not run when we upgrade the database using the DBUA utility.

However, this is not true for the 12.1.0.2 upgrade: Oracle runs datapatch even when we upgrade the database using DBUA.
This can be verified from the datapatch log file.
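
It can also be confirmed from inside the database; a quick check against DBA_REGISTRY_SQLPATCH (available from 12.1 onward):

SQL> select patch_id, action, status, action_time
     from dba_registry_sqlpatch
     order by action_time;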

However, there is a classic case in which Oracle skips running datapatch during the upgrade.

I installed the 12.1.0.2 Oracle home to upgrade an EBS 12.2 database that was running 11.2.0.3.

Later I had to apply an EBS patch on a different EBS 12.2 instance residing on the same server.
For that I changed /etc/oraInst.loc to point to the inventory of the EBS 12.2 instance where I was applying the patch.

(Note: we maintain a separate inventory for each EBS 12.2 installation, and a valid oraInventory is mandatory when running the adop utility.)

Cause:

Datapatch verifies the database patches against the inventory, and Oracle skipped running it during the upgrade because it could not find the right inventory.

Solution:

In this case we have to change the inventory back to point to the Oracle home of the database being upgraded and run datapatch manually:

./datapatch -verbose

This completes the datapatch run.

The moral of the story is to always verify whether the upgrade has actually completed fully.

EBS 12.2 OID profile setting fails with error 'ORA-20002: Seed Data Table FND_PROFILE_OPTION_VALUES has not been prepared for patching' in the patch file system

While following Document 1371932.1, Integrating Oracle E-Business Suite Release 12.2 with Oracle Internet Directory 11gR1, to integrate EBS 12.2 with OID, the scripts below failed when run on the PATCH file system after adop phase=prepare had completed successfully.

set serveroutput on
DECLARE
  stat BOOLEAN;
BEGIN
  stat := FND_PROFILE.SAVE('APPS_SSO_OID_IDENTITY','Y','SITE');
  IF stat THEN
    dbms_output.put_line('Profile APPS_SSO_OID_IDENTITY updated with Enabled');
  ELSE
    dbms_output.put_line('Profile APPS_SSO_OID_IDENTITY could NOT be updated with Enabled');
    commit;
  END IF;
END;
/

set serveroutput on
DECLARE
  stat BOOLEAN;
BEGIN
  stat := FND_PROFILE.SAVE('APPS_SSO_LINK_SAME_NAMES','Y','SITE');
  IF stat THEN
    dbms_output.put_line('Profile APPS_SSO_LINK_SAME_NAMES updated with Enabled');
  ELSE
    dbms_output.put_line('Profile APPS_SSO_LINK_SAME_NAMES could NOT be updated with Enabled');
    commit;
  END IF;
END;
/

DECLARE
  stat BOOLEAN;
BEGIN
  stat := FND_PROFILE.SAVE('APPS_SSO', 'SSWA_SSO', 'SITE');
  IF stat THEN
    dbms_output.put_line('Profile APPS_SSO updated with SSWA_SSO');
  ELSE
    dbms_output.put_line('Profile APPS_SSO could NOT be updated with SSWA_SSO');
    commit;
  END IF;
END;
/

Scripts failed with the following error:

DECLARE
*
ERROR at line 1:
ORA-20002: Seed Data Table FND_PROFILE_OPTION_VALUES has not been prepared for
patching
ORA-06512: at "APPS.FND_PROFILE_OPTION_VALUES+", line 1
ORA-04088: error during execution of trigger 'APPS.FND_PROFILE_OPTION_VALUES+'
ORA-06512: at "APPS.FND_PROFILE_OPTION_VALUES_PKG", line 330
ORA-06512: at "APPS.FND_PROFILE", line 2339
ORA-06512: at line 4

Solution

As a workaround, these scripts can be deferred and run on the RUN file system after the adop cycle is completed and the file systems are flipped.

Oracle has now provided a solution for this in Doc ID 2244443.1.