Root.sh Exits On Second Node After GridSetup.sh -silent -switchGridHome And Successful First Node
(Doc ID 2846230.1)
Last updated on APRIL 17, 2023
Applies to:
Oracle Database - Enterprise Edition - Version 19.8.0.0.0 and later
Information in this document applies to any platform.
Symptoms
root.sh on the second node exits without performing any work.
1. gridSetup.sh with CRS_SWONLY completed successfully using the 19.13 GI gold image; it copied the software to the second node and prompted for root.sh to be run on each node (a sketch of this step follows the list).
2. gridSetup.sh -silent -switchGridHome ran and prompted for root.sh to be run on each node.
3. root.sh on the first node completed successfully and stopped/restarted CRS under the new GI home.
4. root.sh on the second node simply exits without performing any work; CRS is still running from the original GI home.
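For reference, a multi-node software-only Grid Infrastructure installation (step 1) is typically driven by a response file whose install option is set to CRS_SWONLY. The invocation and parameters below are only an illustrative sketch: the response file name is hypothetical, the paths and hostnames are taken from this report, and the parameter names should be verified against the gridsetup.rsp template shipped with the release.

[oracle@hostname_vm01 1913_2110]$ ./gridSetup.sh -silent -responseFile /home/oracle/gi_swonly.rsp -waitForCompletion

where gi_swonly.rsp contains, among the other required parameters:

# software-only install, no cluster configuration performed
oracle.install.option=CRS_SWONLY
# nodes that should receive a copy of the software
oracle.install.crs.config.clusterNodes=hostname_vm01,hostname_vm02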
The customer obtained permission to use an X server application, ran gridSetup.sh -switchGridHome without -silent, saved screenshots of the GUI screens (uploaded in a docx file), saved the response file (also in that zip), and then cancelled.
The run was then repeated with the saved response file and produced the same SEVERE error and the same failure to propagate the rootconfig.sh script to the other node (both attached):
[oracle@hostname_vm01 1913_2110]$ ./gridSetup.sh -silent -switchGridHome -responseFile /home/oracle/switchGridHome_1913_2110.rsp -waitForCompletion
Launching Oracle Grid Infrastructure Setup Wizard...
As a root user, execute the following script(s):
1. /u09/app/oracle/crs/1913_2110/root.sh
Execute /u09/app/oracle/crs/1913_2110/root.sh on the following nodes:
[hostname_vm01, hostname_vm02]
Run the scripts on the local node first. After successful completion, run the scripts in sequence on all other nodes.
You can find the log of this install session at:
/u01/app/oraInventory/logs/UpdateNodeList2021-12-15_03-13-56PM.log
SEVERE:Remote 'UpdateNodeList' failed on nodes: 'hostname_vm01'. Refer to '/u01/app/oraInventory/logs/UpdateNodeList2021-12-15_03-13-56PM.log' for details.
It is recommended that the following command needs to be manually run on the failed nodes:
/u09/app/oracle/crs/19900/oui/bin/runInstaller -updateNodeList -setCustomNodelist -noClusterEnabled ORACLE_HOME=/u09/app/oracle/crs/19900 "CLUSTER_NODES={hostname_vm01,hostname_vm02}" "NODES_TO_SET={hostname_vm01,hostname_vm02}" CRS=false -invPtrLoc "/etc/oraInst.loc" LOCAL_NODE=<node on which command is to be run>.
Please refer 'UpdateNodeList' logs under central inventory of remote nodes where failure occurred for more details.
[WARNING] [INS-10016] Installer failed to update the cluster related details, for this Oracle home, in the inventory on all/some of the nodes
ACTION: You may chose to retry the operation, without continuing further. Alternatively you can refer to information given below and manually execute the mentioned commands on the failed nodes now or later to update the inventory.
*MORE DETAILS*
Execute the following command on node(s) [hostname_vm01]:
/u09/app/oracle/crs/19900/oui/bin/runInstaller -jreLoc /u09/app/oracle/crs/19900/jdk/jre -paramFile /u09/app/oracle/crs/19900/oui/clusterparam.ini -silent -ignoreSysPrereqs -updateNodeList -bigCluster ORACLE_HOME=/u09/app/oracle/crs/19900 CLUSTER_NODES=<Local Node> "NODES_TO_SET={hostname_vm01,hostname_vm02}" -invPtrLoc "/u09/app/oracle/crs/19900/oraInst.loc" -local -doNotUpdateNodeList
You can find the log of this install session at:
/u01/app/oraInventory/logs/UpdateNodeList2021-12-15_03-13-56PM.log
Successfully Setup Software.
[oracle@hostname_vm01 1913_2110]$ ssh -q hostname_vm02 "cat /u09/app/oracle/crs/1913_2110/crs/config/rootconfig.sh" | diff - /u09/app/oracle/crs/1913_2110/crs/config/rootconfig.sh
9,11c9,11
< PATCH_HOME=false
< MOVE_HOME=false
< TRANSPARENT_MOVE_HOME=false
---
> PATCH_HOME=true
> MOVE_HOME=true
> TRANSPARENT_MOVE_HOME=true
28c28
< SWONLY_MULTINODE=true
---
> SWONLY_MULTINODE=false
This diff shows the operation is defective regardless of whether a response file is used: rootconfig.sh on the second node was never updated to match the first node.
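A quick way to confirm the same symptom, assuming the paths and hostnames used in this report, is to compare the relevant rootconfig.sh flags on both nodes. This loop is only an illustration of the check already performed by the diff above:

for h in hostname_vm01 hostname_vm02; do
  echo "== $h =="
  # MOVE_HOME also matches TRANSPARENT_MOVE_HOME
  ssh -q "$h" "grep -E 'PATCH_HOME|MOVE_HOME|SWONLY_MULTINODE' /u09/app/oracle/crs/1913_2110/crs/config/rootconfig.sh"
done

Per the diff above, the first node shows PATCH_HOME=true, MOVE_HOME=true, TRANSPARENT_MOVE_HOME=true and SWONLY_MULTINODE=false, while the second node still carries the stale software-only values, which is consistent with root.sh exiting there without doing any work.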
Cause