Running/Accessible OID 11g Cluster Nodes (oid1/oid2) Showing as Down in EM | emagent.trc File Shows Error: ORA-28000: the account is locked
Last updated on March 08, 2017
Applies to: Oracle Internet Directory - Version 11.1.1 and later
Information in this document applies to any platform.
Oracle Internet Directory (OID) 11g cluster instances (oid1/oid2) incorrectly show a Down status in the Enterprise Manager (EM) Fusion Middleware (FMW) Control console, even though opmnctl status shows all processes as Alive and connections are working.
The following appears in the emagent.trc file frequently:
2014-07-08 13:53:41,768 Thread-1218434816 ERROR vpxoci: ORA-28000: the account is locked
2014-07-08 13:53:41,768 Thread-1218434816 WARN vpxoci: Login 0xbc0248c0 failed, error=ORA-28000: the account is locked
2014-07-08 13:53:41,768 Thread-1218434816 ERROR engine: [oracle_ldap,/Farm_PRODDomain/ias_prod1/oid1,OIDserverSecRefSuLoginFailCC] : nmeegd_GetMetricData failed : ORA-28000: the account is locked
Unlocking the ODSSM account and following all the steps in the documentation does not resolve the issue.
Tried stopping everything on both nodes, killing any leftover processes, and unlocking the ODSSM account again. Then started Node Manager, then WebLogic Server, then the OID and OVD processes manually:
opmnctl startproc ias-component=oid1
opmnctl startproc ias-component=ovd1
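The stop/unlock/start sequence above can be sketched as follows. This is a minimal illustration, not the documented procedure: the SYSDBA connection, $WL_HOME, and $DOMAIN_HOME paths are placeholders for this environment, and the component names (oid1, ovd1) are the ones used in this article.

```shell
# Sketch of the restart sequence described above (paths are assumptions).
# 1. Unlock ODSSM in the OID metadata repository and verify its status:
sqlplus -s / as sysdba <<'EOF'
ALTER USER ODSSM ACCOUNT UNLOCK;
SELECT account_status FROM dba_users WHERE username = 'ODSSM';
EOF

# 2. Start the stack in order: Node Manager, WebLogic Server, then the
#    OPMN-managed OID/OVD components:
$WL_HOME/server/bin/startNodeManager.sh &
$DOMAIN_HOME/bin/startWebLogic.sh &
opmnctl startproc ias-component=oid1
opmnctl startproc ias-component=ovd1
```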
At this stage, ODSSM is not locked. After starting the EMAgent:
opmnctl startproc ias-component=EMAGENT
The ODSSM can lock right away, or lock after a couple of hours.
Similarly, if node2 is then started in the same way as node1, the ODSSM account locks after a few seconds.
Tried other known MOS Notes for similar issues, to no avail.
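Since the lock can occur immediately or only after a couple of hours, a simple polling loop can help capture the exact lock time for correlation with emagent.trc timestamps. This is a hypothetical helper, assuming SYSDBA access to the repository database from the monitoring host:

```shell
# Assumption: sqlplus is on the PATH and a SYSDBA connection is available.
# Poll the ODSSM account status every 60 seconds and log a timestamp;
# stop as soon as the account reports LOCKED.
while true; do
  status=$(sqlplus -s / as sysdba <<'EOF'
set heading off feedback off
SELECT account_status FROM dba_users WHERE username = 'ODSSM';
EOF
)
  echo "$(date '+%Y-%m-%d %H:%M:%S') ODSSM: $status"
  case "$status" in *LOCKED*) break ;; esac
  sleep 60
done
```

Correlating the logged lock time with emagent.trc entries on each node helps identify which agent submitted the failing logons.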
Followed the documentation again, but making sure the EMAgent was completely down on both nodes before starting the steps (reference <Document 1380036.1>), and also updated opmn registration and DIP on node2 afterwards. After starting the EMAgent very last, on node1 ODSSM was not getting locked this time, but after stopping and restarting everything, ODSSM locked again.
Noticed that the emagent.trc file referenced node2 even though node2 was down at the time:
2014-08-21 13:53:29,538 Thread-2949637888 ERROR engine: [oracle_ovd,/Farm_PRODDomain/ias_prod2/ovd2,getUsers] : nmeegd_GetMetricData failed : Error communicating with server.
2014-08-21 13:53:29,538 Thread-2949637888 WARN collector: Error exit. Error message: Error communicating with server.
2014-08-21 13:53:31,574 Thread-2949637888 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
SQL = " OCISessionGet"...
LOGIN = odssm/@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=prod-oid-cluster.mycompany.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=proid.mycompany.com)))
2014-08-21 13:53:31,574 Thread-2949637888 ERROR vpxoci: ORA-01017: invalid username/password; logon denied
2014-08-21 13:53:31,574 Thread-2949637888 WARN vpxoci: Login 0xb4145e20 failed, error=ORA-01017: invalid username/password; logon denied
2014-08-21 13:53:31,574 Thread-2949637888 ERROR engine: [oracle_ldap,/Farm_PRODDomain/ias_prod2/oid2,OIDserverEntryCacheHitRatioCC] : nmeegd_GetMetricData failed : ORA-01017: invalid username/password; logon denied
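To confirm which host is actually submitting the stale credentials, the failed logons can be traced in the repository database itself. A sketch, assuming AUDIT SESSION is enabled and the standard audit trail (DBA_AUDIT_TRAIL) is in use; return codes 1017 and 28000 correspond to the ORA-01017 and ORA-28000 errors seen above:

```shell
# Assumption: session auditing is enabled in the metadata repository.
# List recent failed ODSSM logons with the originating host, to see
# whether they come from node1 or node2.
sqlplus -s / as sysdba <<'EOF'
SELECT userhost, terminal, returncode, timestamp
FROM   dba_audit_trail
WHERE  username = 'ODSSM'
AND    returncode IN (1017, 28000)
ORDER  BY timestamp DESC;
EOF
```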
Next, shut everything down on node1 and node2. Restarted everything on node1, and the ODSSM account was not locked.
Started node2, and a few minutes later the account was locked; this time the emagent.trc file shows the error from node1:
2014-08-21 13:41:20,346 Thread-4104398592 ERROR engine: [oracle_ovd,/Farm_PRODDomain/ias_prod1/ovd1,getUsers] : nmeegd_GetMetricData failed : Error communicating with server.
2014-08-21 13:41:20,346 Thread-4104398592 WARN collector: Error exit. Error message: Error communicating with server.
2014-08-21 13:43:11,490 Thread-4101248768 WARN vpxoci: OCI Error -- ErrorCode(1017): ORA-01017: invalid username/password; logon denied
So the stop/restart seems to trigger the account locking as well.
NOTE: For OID 11.1.1.x.0, also tried <Patch 13490778> (referenced in <Document 1613585.1> and other related Notes). It applied successfully on node1, but it did not help with the problem. The same patch fails on node2 with the error: "Patch "13490778" is not needed since it has no fixes for this Oracle Home", even though "opatch lsinventory -detail" output from both nodes' $FMW_Home/oracle_common homes shows the same products and versions installed.
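To pin down why OPatch treats the two supposedly identical homes differently, the full inventories can be dumped on each node and compared. A sketch; $FMW_HOME and the /tmp output paths are placeholders for this environment:

```shell
# Run on each node against the oracle_common home the patch targets:
export ORACLE_HOME=$FMW_HOME/oracle_common
$ORACLE_HOME/OPatch/opatch lsinventory -detail > /tmp/lsinv_$(hostname).txt

# Then copy one file across and diff; any difference in interim patches
# or component versions explains the "not needed" message on node2:
diff /tmp/lsinv_node1.txt /tmp/lsinv_node2.txt
```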