DBRM (OSPID: XXXX) Waits For Event 'LATCH: SHARED POOL' For XXX Secs
(Doc ID 2673443.1)
Last updated on JULY 20, 2024
Applies to:
Oracle Database - Enterprise Edition - Version 18.1.0.0.0 and later
Information in this document applies to any platform.
Symptoms
The DBRM (Database Resource Manager) background process was stuck waiting on a 'shared pool' latch, resulting in RAC instance termination.
From Database instance alert log <ORACLE_BASE>/diag/rdbms/<DBNAME>/<SID>/trace/alert_<SID>.log
Wed May 13 03:13:35 2020
ARC2 (PID:11910): Archived Log entry 91654 added for T-1.S-43188 ID 0xb969d1cb LAD:1
Wed May 13 03:14:05 2020
kkjcre1p: unable to spawn jobq slave process
Wed May 13 03:14:05 2020
Errors in file /oracle/base/diag/rdbms/<DBNAME>/<DBNAME>1/trace/<DBNAME>1_cjq0_11961.trc:
Wed May 13 03:14:17 2020
DBRM (ospid: 6357) waits for event 'latch: shared pool' for 77 secs.
DBRM (ospid: 6357) waits for latch 'shared pool' for 77 secs.
...
Wed May 13 03:14:57 2020
DBRM (ospid: 6357) waits for event 'latch: shared pool' for 117 secs.
DBRM (ospid: 6357) waits for latch 'shared pool' for 117 secs.
Wed May 13 03:15:01 2020
DIA0 Critical Database Process Blocked: Hang ID 109 blocks 17 sessions
Final blocker is session ID 632 serial# 39157 OSPID 6395 on Instance 1
If resolvable, process termination will be attempted by Hang Manager
Errors in file /oracle/base/diag/rdbms/<DBNAME>/<DBNAME>1/trace/<DBNAME>1_dia0_6374.trc (incident=736222):
ORA-32701: Possible hangs up to hang ID=109 detected
Incident details in: /oracle/base/diag/rdbms/<DBNAME>/<DBNAME>1/incident/incdir_736222/<DBNAME>1_dia0_6374_i736222.trc
Wed May 13 03:15:03 2020
DIA0 terminating instance 1 (local instance) due to a LOCAL, HIGH confidence Hang with ID=109.
Critical background process blocked in a branched chain.
Hang Resolution Reason: This hang is being resolved due to degrading global
hang metrics and the root is a Critical background process.
Informed instance 2 to perform instance termination of master instance 1.
Wed May 13 03:15:03 2020
LMON received an instance eviction notification from instance 2
The instance eviction reason is 0x3
The instance eviction map is 1
Wed May 13 03:15:03 2020
LMD0 (ospid: 6378) received an instance eviction notification from instance 2 [3]
Wed May 13 03:15:04 2020
PMON (ospid: 6311): terminating the instance due to ORA error 495
Cause - 'Instance is being terminated due to fatal process death (pid: 7, ospid: 6335, GEN0)'
Wed May 13 03:15:04 2020
System state dump requested by (instance=1, osid=6311 (PMON)), summary=[abnormal instance termination]. error - 'Instance is terminating.'
System State dumped to trace file /oracle/base/diag/rdbms/<DBNAME>/<DBNAME>1/trace/<DBNAME>1_diag_6352_20200513031504.trc
Wed May 13 03:15:06 2020
License high water mark = 1805
Wed May 13 03:15:10 2020
Instance terminated by PMON, pid = 6311
2020-05-13T03:15:11.283025+02:00
Warning: 2 processes are still attached to shmid 5439508:
(size: 69632 bytes, creator pid: 6028, last attach/detach pid: 6382)
Wed May 13 03:15:11 2020
USER (ospid: 19014): terminating the instance
Wed May 13 03:15:11 2020
Instance terminated by USER, pid = 19014
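The escalating warnings in the alert log above (DBRM waiting 77 s, then 117 s) are the earliest visible signal of this hang, well before DIA0 terminates the instance. As an illustration only (not part of the original note), a minimal Python sketch that scans alert-log lines for these long-wait warnings; the 60-second threshold is an assumption:

```python
import re

# Matches alert-log lines such as:
#   DBRM (ospid: 6357) waits for event 'latch: shared pool' for 77 secs.
WAIT_RE = re.compile(
    r"(\w+) \(ospid: (\d+)\) waits for event '([^']+)' for (\d+) secs\."
)

def scan_waits(lines, min_secs=60):
    """Yield (process, ospid, event, secs) for each long-wait warning."""
    for line in lines:
        m = WAIT_RE.search(line)
        if m and int(m.group(4)) >= min_secs:
            yield m.group(1), int(m.group(2)), m.group(3), int(m.group(4))

# Sample lines taken from the excerpt above
sample = [
    "DBRM (ospid: 6357) waits for event 'latch: shared pool' for 77 secs.",
    "DBRM (ospid: 6357) waits for event 'latch: shared pool' for 117 secs.",
]
for proc, ospid, event, secs in scan_waits(sample):
    print(proc, ospid, event, secs)
```

A growing wait time for the same ospid across successive warnings, as seen here, indicates the blocker is not releasing the latch rather than a transient spike.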
From DIA0 incident trace file <ORACLE_BASE>/diag/rdbms/<DBNAME>/<SID>/incident/<DBNAME>1_dia0_*.trc
*** 2020-05-13 03:15:01.606
 Inst  Sess    Ser              Proc     Wait
  Num    ID    Num     OSPID    Name  Time(s)  Event
-----  ----  -----  --------  ------  -------  -----
    1   128  55282      6436    MMON      124  latch: shared pool
    1   632  39157      6395    RMS0      125  not in wait
service name: SYS$BACKGROUND
Current Wait Stack:
Not in wait; last wait ended 2 min 8 sec ago
There are 21 sessions blocked by this session.
Dumping one waiter:
inst: 1, sid: 128, ser: 55282
wait event: 'latch: shared pool'
p1: 'address'=0x6088e200
p2: 'number'=0x250
p3: 'why'=0x608f7908
row_wait_obj#: 4294967295, block#: 0, row#: 0, file# 0
min_blocked_time: 71 secs, waiter_cache_ver: 65447
Wait State:
fixed_waits=0 flags=0x20 boundary=(nil)/-1
Session Wait History:
elapsed time of 2 min 8 sec since last wait
0: waited for 'rdbms ipc message'
timeout=0x12c, =0x0, =0x0
wait_id=48181057 seq_num=39117 snap_id=1
wait times: snap=2.999906 sec, exc=2.999906 sec, total=2.999906 sec
wait times: max=3.000000 sec
wait counts: calls=1 os=1
occurred after 0.000091 sec of elapsed time
1: waited for 'rdbms ipc message'
timeout=0x12c, =0x0, =0x0
wait_id=48181056 seq_num=39116 snap_id=1
wait times: snap=2.999026 sec, exc=2.999026 sec, total=2.999026 sec
wait times: max=3.000000 sec
wait counts: calls=1 os=1
occurred after 0.000013 sec of elapsed time
2: waited for 'KSV master wait'
indicator=0x9, =0x0, =0x0
wait_id=48181055 seq_num=39115 snap_id=1
wait times: snap=0.000039 sec, exc=0.000039 sec, total=0.000039 sec
wait times: max=infinite
..
(latch info) hold_bits=0x80 ud_influx=0x2e2
Holding:
0x6088e200 Child 'shared pool' (level=7, SGA latch, child#=6) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Holder Location: kgh.h LINE:9508 ID:kghfrunp: alloc: wait:
Holder Slot: slot=7, efd=7, pdb=0
Holder Context: 0
Latch State:
state=busy [value=0x1d]
holder orapid=29, ospid=6395
wlstate=free value=0
waiters [orapid (seconds since: put on list, posted, alive check)]:
164 (129, *, 0)
49 (128, *, 29)
180 (126, *, 93)
15 (124, *, 0)
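The latch dump above identifies the root of the hang: the holder (orapid=29, ospid 6395, the RMS0 session that the hang analysis names as the final blocker) is holding child 'shared pool' latch 0x6088e200 from kghfrunp (shared-pool free-unpinned-memory cleanup), while the waiter list records how long each orapid has been queued. A hypothetical Python sketch (again, not from the original note) that parses those waiter lines to rank waiters by queue time:

```python
import re

# Waiter lines in the latch dump look like:
#   164 (129, *, 0)
# i.e. orapid (seconds since: put on list, posted, alive check)
WAITER_RE = re.compile(r"^\s*(\d+) \((\d+), ([^,]+), (\d+)\)")

def parse_waiters(lines):
    """Return [(orapid, secs_on_list)] sorted longest-waiting first."""
    waiters = []
    for line in lines:
        m = WAITER_RE.match(line)
        if m:
            waiters.append((int(m.group(1)), int(m.group(2))))
    return sorted(waiters, key=lambda w: -w[1])

# Waiter lines taken from the trace excerpt above
dump = ["164 (129, *, 0)", "49 (128, *, 29)", "180 (126, *, 93)", "15 (124, *, 0)"]
print(parse_waiters(dump))
```

The spread of queue times (124-129 s) matching the DBRM wait warnings in the alert log confirms these sessions have all been stuck behind the same holder since roughly the same moment.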
Changes
No changes
Cause