EXACS:GI precheck fails with error 'ahfctl statusahf script is failed/unexecuted on nodes : [<NODE2>].'
(Doc ID 2887851.1)
Last updated on NOVEMBER 16, 2022
Applies to:
Oracle Cloud Infrastructure - Exadata Cloud Service - Version N/A and later
Information in this document applies to any platform.
Symptoms
Applying GI RU (19.15 to 19.16), the pre-check fails with the errors below.
//From pilot_2022-07-27_02-51-18-PM
Execution status of succeeded node:<NODE1> >>>>> On node 1 (<NODE1>) the ahfctl statusahf command succeeds; on node 2 it fails as shown below.
INFO: [2022-07-27 14:51:41.435 CEST][pool-1-thread-1][DefaultCommandExecutor.run:371] ********Overall result for command execution:]0;<NODE2>.exacli.prod1fra.oraclevcn.com^G/opt/oracle.ahf/bin/ahfctl statusahf ******
INFO: [2022-07-27 14:51:41.436 CEST][pool-1-thread-1][DefaultCommandExecutor.run:372] Execution of ]0;<NODE2>.exacli.prod1fra.oraclevcn.com^G/opt/oracle.ahf/bin/ahfctl statusahf script is failed/unexecuted on nodes : [<NODE2>]
Execution status of failed node:<NODE2>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Errors : [bash: ]0: command not found
, bash: <NODE2>.exacli.prod1fra.oraclevcn.com^G/opt/oracle.ahf/bin/ahfctl: No such file or directory >>>>> AHF status failure on node 2 (<NODE2>)
]
Standard output : ^[]0;<NODE2>.exacli.prod1fra.oraclevcn.com^G
Execution status of <NODE2> is:true
Execution exit code of <NODE2> is:127
INFO: [2022-07-27 14:51:41.436 CEST][pool-1-thread-1][DefaultCommandExecutor.run:373] ********End of overall result for command execution******
FINE: [2022-07-27 14:51:41.447 CEST][pool-1-thread-1][Resource.getString:197] Can't find resource for bundle oracle.dbcloud.common.lib.resource.CommonErrorResID, key oracle.dbcloud.common.lib.resource.CommonErrorCode.COMMAND_EXECUTION_FAILED_ON_NODES_ERR.cause
//
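The `]0;<host>^G` fragments embedded in the failing command are consistent with the xterm window-title control sequence (ESC ] 0 ; title BEL) emitted by the remote shell's prompt, with the ESC byte lost in the log and BEL rendered as ^G. A minimal sketch (placeholder host and command, not taken from the note) reproduces how such a sequence looks when it pollutes captured command output:

```shell
# Illustration only: emit an xterm title escape in front of a command string,
# then render the control bytes visibly with cat -v (ESC -> ^[, BEL -> ^G).
# "node2.example.com" and the command text are placeholders.
printf '\033]0;%s\007%s\n' "node2.example.com" "/opt/oracle.ahf/bin/ahfctl statusahf" | cat -v
```

The rendered output (`^[]0;node2.example.com^G/opt/oracle.ahf/bin/ahfctl statusahf`) matches the shape of the corrupted command string in the precheck log above.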
[root@<NODE1> opc]# tfactl print status
.---------------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+------------------+---------------+--------+------+------------+----------------------+------------------+
| <NODE2> | RUNNING | 226666 | 5000 | 22.2.0.0.0 | 22200020220707070249 | COMPLETE |
'------------------+---------------+--------+------+------------+----------------------+------------------'
[root@<NODE2> opc]# tfactl print config
//From gi1916precheckfail_node1.txt
.---------------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+------------------+---------------+--------+------+------------+----------------------+------------------+
| <NODE1> | RUNNING | 363003 | 5000 | 22.2.0.0.0 | 22200020220707070249 | COMPLETE |
'------------------+---------------+--------+------+------------+----------------------+------------------'
In both cases, AHF is running in standalone mode only, not in cluster mode.
[root@<NODE1> ~]# tfactl print hosts
Host Name : <NODE1> >>>>> Only one node is listed.
[root@<NODE1> ~]# ahfctl statusahf
.---------------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+------------------+---------------+--------+------+------------+----------------------+------------------+
| <NODE1> | RUNNING | 363003 | 5000 | 22.2.0.0.0 | 22200020220707070249 | COMPLETE |
'------------------+---------------+--------+------+------------+----------------------+------------------'
++Even running tfactl syncnodes does not help here.
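The standalone state observed above can be confirmed mechanically by counting the hosts that TFA reports; a sketch assuming the `tfactl print hosts` output format shown earlier (the sample line is piped in here so the snippet is self-contained — on a live node you would pipe `tfactl print hosts` itself):

```shell
# Count the nodes TFA knows about; in cluster mode every node appears
# as a "Host Name :" line. One line means TFA is effectively standalone.
nhosts=$(printf 'Host Name : node1\n' | grep -c 'Host Name')
if [ "$nhosts" -lt 2 ]; then
  echo "TFA standalone: only $nhosts host registered"
fi
```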
//Observed a TFA port connection failure, which causes the remote TFA connection to fail.
[root@<NODE1> internal]# curl -v telnet://<NODE1>:5000
* About to connect() to <NODE1> port 5000 (#0)
* Trying 10.33.5.45...
* Connected to <NODE1> (10.33.5.45) port 5000 (#0)
[root@<NODE1> internal]# curl -v telnet://<NODE2>:5000
* About to connect() to <NODE2> port 5000 (#0)
* Trying 10.33.5.46...
* Connection timed out
* Failed connect to <NODE2>:5000; Connection timed out
* Closing connection 0
curl: (7) Failed connect to <NODE2>:5000; Connection timed out >>>>> Unable to connect to the remote node on port 5000.
[root@<NODE1> internal]#
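The curl test above can be scripted to probe TFA port reachability toward every node in one pass; a sketch using bash's /dev/tcp redirection, with placeholder host names and the TFA port 5000 reported by `tfactl print status`:

```shell
# Probe TCP reachability of the TFA port (5000 here) on each cluster node.
# node1/node2 are placeholders. A timeout distinguishes a firewall drop
# (the connect hangs, as in the log above) from an immediate refusal
# (the TFA process is simply down).
for host in node1 node2; do
  if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/5000" 2>/dev/null; then
    echo "${host}:5000 reachable"
  else
    echo "${host}:5000 NOT reachable"
  fi
done
```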
Cause