ODA Failed To Start CRS On Local Nodes When Patching to 12.1.2.12. Failed To Patch Server (grid) Component is Skipped

(Doc ID 2385304.1)

Last updated on JUNE 27, 2018

Applies to:

Oracle Database Appliance X5-2 - Version All Versions to All Versions [Release All Releases]
Oracle Database Appliance Software - Version 12.1.2.11 to 12.1.2.12 [Release 12.1]
Information in this document applies to any platform.

Symptoms

ODA patching from 12.1.2.11.0 to 12.1.2.12.0

Existing ODA software version: 12.1.2.11.0
Upgrading to ODA software version: 12.1.2.12.0

The GI (Grid Infrastructure) patch fails on one or both nodes of the ODA.

During patching, all other firmware, OS, and storage components are patched up to 12.1.2.12, but the GI component is skipped without any error being reported.

Because the GI portion is skipped, the GI is never upgraded.

Repeating the procedure verifies that all other components are upgraded; the GI portion of the patching continues to be skipped without error.

Duration:

The attempt to patch the GI takes over 5 minutes before the patch failure is reported.

There is no specific indication of why this phase of the GI patching takes so long.
What is confirmed is that CRS is not restarted within this 5-minute-plus window.

CRS fails to start during the patching window for the GI.

However, immediately after the patching fails, CRS can be restarted manually.
There does not appear to be any problem with CRS outside of patching:
CRS can be started and stopped manually and during normal shutdown and startup procedures.
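The manual restart described above can be sketched as a small shell check. The Grid home path is an assumption taken from the crsconfig_params line in the patch log excerpt later in this note; adjust it for your installation:

```shell
# Hedged sketch: restart CRS by hand after the GI patch fails.
# GRID_HOME is an assumption (path from the crsconfig_params line
# in the patch log); it may differ on your appliance.
GRID_HOME="${GRID_HOME:-/u01/app/12.1.0.2/grid}"
CRSCTL="$GRID_HOME/bin/crsctl"

if [ -x "$CRSCTL" ]; then
    # Report current state, then start CRS if it is down.
    "$CRSCTL" check crs || "$CRSCTL" start crs
else
    echo "crsctl not found at $CRSCTL"
fi
```

Run as root on the affected node; on a healthy post-patch-failure system the start should succeed, which distinguishes this issue from a genuinely broken clusterware stack.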


Retrying the patch results in the same outcome.

After executing the recommended cleanup job, "oakcli show version -detail" showed that the GI was not patched.

Retrying /opt/oracle/oak/bin/oakcli update -patch 12.1.2.12.0 --server (per REF below) produced the same result.
However, restarting CRS manually afterwards works without problem.

You can confirm that the GI is not at the expected version of CRS and OCW using the "oakcli show version -detail" command.

The patchset level for the GI can also be confirmed by reviewing OPatch.
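As a rough illustration of that check, the awk one-liner below flags any component whose installed version lags the available one. The sample text and version strings are assumptions for illustration, not verbatim "oakcli show version -detail" output:

```shell
# Hedged sketch: flag components that "oakcli show version -detail"
# reports as not yet patched. The sample text below is illustrative;
# the real command output format may differ.
sample_output='Component    Installed           Available
GI           12.1.0.2.170418     12.1.0.2.170814
OS           6.8                 up-to-date'

echo "$sample_output" | awk 'NR > 1 && NF == 3 && $3 != "up-to-date" && $2 != $3 {
    print $1 " not patched: " $2 " -> " $3
}'
```

In the failure described here, the GI line is the only component left showing a stale installed version after repeated patch attempts.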


#2

> CRS-5017: The resource action "ora.01a.01a.svc clean" encountered the following error:

Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
Oracle Clusterware active version on the cluster is [12.1.0.2.0].

 The cluster upgrade state is [NORMAL].
 The cluster active patch level is [1696621382].

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oda01'
CRS-2673: Attempting to stop 'ora.crsd' on 'oda01'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'oda01'
CRS-2673: Attempting to stop 'ora.FLASH.dg' on 'oda01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'oda01'
CRS-2673: Attempting to stop 'ora.RECO.dg' on 'oda01'
CRS-2673: Attempting to stop 'ora.REDO.dg' on 'oda01'
CRS-2673: Attempting to stop 'ora.redo.o01.acfs' on 'oda01'
CRS-2673: Attempting to stop 'ora.reco.o02.acfs' on 'oda01'
CRS-2673: Attempting to stop 'ora.reco.o03.acfs' on 'oda01'
CRS-2673: Attempting to...

> CRS-5017: The resource action "ora.01a.db start" encountered the following error:

> ORA-01565: error in identifying file '/u02/app/oracle/oradata/datastore/.ACFS/snaps/01a/01a/spfile01a.ora'
> Linux-x86_64 Error: 2: No such file or directory
> CRS-5017: The resource action "ora.01a.01.svc start" encountered the following error:

Other error messages point to CRS not being restarted within the expected time period.
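A quick way to pull these errors out of a large patch or alert log is a pattern scan. The heredoc below is a stand-in for a real log file, using the messages quoted above:

```shell
# Hedged sketch: extract the CRS-5017 / ORA-01565 failures from a log.
# The heredoc stands in for a real CRS alert or patch log file.
grep -E 'CRS-5017|ORA-01565' <<'EOF'
CRS-5017: The resource action "ora.01a.db start" encountered the following error:
ORA-01565: error in identifying file '/u02/app/oracle/oradata/datastore/.ACFS/snaps/01a/01a/spfile01a.ora'
Linux-x86_64 Error: 2: No such file or directory
EOF
```

Against the real logs, point grep at the clusterware alert log and the oakcli patch logs to confirm the same two error codes recur on each retry.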

 

 

Changes

 Patching GI to 12.1.2.12

Cause
