
PROT-1 PROC-26 Error During 19c Grid Infrastructure Upgrade (Doc ID 2667198.1)

Last updated on APRIL 17, 2023

Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.4 and later
Information in this document applies to any platform.

Symptoms

The source environment is a multi-node 11.2.0.4 cluster with storage on non-ASM and the OCR backup on an NFS mount. The following error appears during the upgrade to 19c Grid Infrastructure while running rootupgrade.sh.
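
For context, the existing OCR configuration and backup location on the 11.2.0.4 home can be confirmed with the standard OCR tools before the upgrade. A minimal sketch, run as root on a Linux node; the 11.2.0.4 Grid home path is an assumption taken from the log below:

cat /etc/oracle/ocr.loc                        # configured OCR locations (non-ASM files in this scenario)
/ora_grid/11.2.0.4/bin/ocrcheck                # verify OCR integrity and report where it is stored
/ora_grid/11.2.0.4/bin/ocrconfig -showbackup   # list automatic and manual OCR backups (here kept on an NFS mount)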

Excerpt from the rootupgrade.sh log at <ORACLE_BASE>/crsdata/<node name>/crsconfig/rootcrs_<HOSTNAME>_<TIMESTAMP>.log:

2020-03-19 16:40:54: Executing cmd: <19c GRID_HOME>/bin/clscfg -localupgrade
2020-03-19 16:40:56: Command output:
> clscfg: EXISTING configuration version 0 detected.
> Successfully deleted 2 keys from OCR.
> Creating OCR keys for user 'root', privgrp 'root'..
> Operation successful.
>End Command output
2020-03-19 16:40:56: Keys created in the OLR successfully
2020-03-19 16:40:56: Executing the step [setup_sandbox_ocr_step_2] to setup sandbox ocr.
2020-03-19 16:40:56: Setting up OCR for sandbox
2020-03-19 16:40:56: ocrloc = <ORACLE_BASE>/crsdata/<HOSTNAME>/sandbox/ocr.loc, data = <ORACLE_BASE>/crsdata/<HOSTNAME>/sandbox/data.ocr, mirror = <ORACLE_BASE>/crsdata/<HOSTNAME>/sandbox/mirror.ocr
2020-03-19 16:40:56: Executing cmd: /ora_grid/11.2.0.4/bin/ocrconfig -import <ORACLE_BASE>/crsdata/<HOSTNAME>/sandbox/sboldocr.exp
2020-03-19 16:40:56: Command output:
> PROT-1: Failed to initialize ocrconfig <<<<<<<<<
> PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2] <<<<<<<<<
>End Command output
2020-03-19 16:40:56: ocr restore output : PROT-1: Failed to initialize ocrconfig PROC-26: Error while accessing the physical storage Operating System error [No such file or directory] [2]
2020-03-19 16:40:56: Executing cmd: <19c GRID_HOME>/bin/crsctl sandbox clean
2020-03-19 16:40:56: crsctl sandbox clean output =
2020-03-19 16:40:56: Upgrade dry run failed <<<<<<<<<<<<<<<<<<
2020-03-19 16:40:56: Executing cmd: <19c GRID_HOME>/bin/clsecho -p has -f clsrsc -m 694
2020-03-19 16:40:56: Executing cmd: <19c GRID_HOME>/bin/clsecho -p has -f clsrsc -m 694
2020-03-19 16:40:56: Command output:
> CLSRSC-694: failed to validate CRS entities for upgrade, aborting the upgrade <<<<<<<<<<<<<<<<<<
>End Command output
2020-03-19 16:40:56: CLSRSC-694: failed to validate CRS entities for upgrade, aborting the upgrade
2020-03-19 16:40:56: Executing cmd: <19c GRID_HOME>/bin/clsecho -p has -f clsrsc -m 362
2020-03-19 16:40:56: Executing cmd: <19c GRID_HOME>/bin/clsecho -p has -f clsrsc -m 362
2020-03-19 16:40:56: Command output:
> CLSRSC-362: The pre-upgrade checks failed, aborting the upgrade
>End Command output
2020-03-19 16:40:56: CLSRSC-362: The pre-upgrade checks failed, aborting the upgrade
2020-03-19 16:40:56: ###### Begin DIE Stack Trace ######
2020-03-19 16:40:56: Package File Line Calling
2020-03-19 16:40:56: --------------- -------------------- ---- ----------
2020-03-19 16:40:56: 1: main rootcrs.pl 355 crsutils::dietrap
2020-03-19 16:40:56: 2: crsupgrade crsupgrade.pm 3772 main::__ANON__
2020-03-19 16:40:56: 3: crsupgrade crsupgrade.pm 1883 crsupgrade::upgrade_dry_run
2020-03-19 16:40:56: 4: crsupgrade crsupgrade.pm 1079 crsupgrade::get_oldconfig_info
2020-03-19 16:40:56: 5: crsupgrade crsupgrade.pm 603 crsupgrade::CRSUpgrade
2020-03-19 16:40:56: 6: main rootcrs.pl 364 crsupgrade::new
2020-03-19 16:40:56: ####### End DIE Stack Trace #######

2020-03-19 16:40:56: ROOTCRS_OLDHOMEINFO checkpoint has failed
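
The failing command is the 11.2.0.4 ocrconfig importing the export file created in the 19c sandbox directory, and PROC-26 reports the operating-system error "No such file or directory". A minimal sketch of checks that follow from that message, run as root and using the placeholder paths from the log above (actual paths differ per environment); these checks only confirm whether the referenced paths are visible from the node:

ls -ld <ORACLE_BASE>/crsdata/<HOSTNAME>/sandbox                # sandbox directory created for the upgrade dry run
ls -l <ORACLE_BASE>/crsdata/<HOSTNAME>/sandbox/sboldocr.exp    # export file that the failing import reads
cat /etc/oracle/ocr.loc                                        # OCR locations that the 11.2.0.4 ocrconfig opens
mount | grep -i nfs                                            # confirm the NFS mount holding the OCR files/backup is present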

Cause

