
LCK temporarily stuck waiting for latch 'Child row cache objects' (Doc ID 843638.1)

Last updated on MARCH 07, 2019

Applies to:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 11.2.0.2 [Release 10.2 to 11.2]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Backup Service - Version N/A and later
Information in this document applies to any platform.

Symptoms

One RAC instance hangs intermittently. Each hang resolves on its own after a while, then recurs some time later.

1. alert.log info

=> The alert.log of the hanging instance shows that on one node the LCK process is
reported as a potential blocker on different enqueues, and hang messages are reported
afterwards (e.g. ORA-3136 and WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK)

LCK0 started with pid=10, OS id=7217
Wed Feb 11 12:25:36 2009
...
Mon Feb 16 19:01:24 2009
GES: Potential blocker (pid=7217) on resource IV-0051D934-1013071D;
 enqueue info in file /u01/oracle/admin/RACDB/bdump/RACDB2_lmd0_7006.trc and
DIAG trace file
Mon Feb 16 19:15:59 2009
GES: Potential blocker (pid=7217) on resource CI-00000046-00000002;
 enqueue info in file /u01/oracle/admin/RACDB/bdump/RACDB2_lmd0_7006.trc and
DIAG trace file
...
Mon Feb 16 19:26:16 2009
>>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! pid=25
System State dumped to trace file
/u01/oracle/admin/RACDB/bdump/RACDB2_mmon_7032.trc
...
Tue Feb 16 19:35:22 2009
WARNING: inbound connection timed out (ORA-3136)
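
While the hang is in progress, the GES blockers flagged in the alert.log can be
cross-checked from SQL. The following is only a generic sketch (not taken from this
note's trace files) and assumes the GV$GES_BLOCKING_ENQUEUE view is available:

-- Generic sketch: list GES enqueues currently involved in blocking,
-- e.g. the IV-... and CI-... resources named in the alert.log messages.
-- The pid column should correspond to the OS process id reported there.
SELECT inst_id, resource_name1, pid, owner_node,
       grant_level, request_level, blocker, blocked
  FROM gv$ges_blocking_enqueue
 ORDER BY inst_id, resource_name1;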

=> The alert.log on the other nodes shows potential blocker messages for CI (cross-instance call) enqueues

Mon Feb 16 19:27:53 2009
GES: Potential blocker (pid=13607) on resource CI-00000046-00000005;
 enqueue info in file /u01/oracle/admin/RACDB/udump/RACDB1_ora_8418.trc and
DIAG trace file
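
Sessions stuck behind the row cache enqueues can also be listed directly. This is a
generic query against GV$SESSION, not part of the original diagnostics:

-- Generic query: sessions currently waiting on a row cache enqueue,
-- with the waited-for cache id (p1) and the reported blocker, if any.
SELECT inst_id, sid, event, p1 cache_id,
       seconds_in_wait, blocking_instance, blocking_session
  FROM gv$session
 WHERE event = 'row cache lock'
 ORDER BY seconds_in_wait DESC;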

2. Systemstate dumps show that the LCK process is stuck waiting for the 'Child row cache objects'
latch and that many processes are waiting on it (high waiter count). The LCK process stack
contains functions such as 'sskgpwwait kslges kqrbip kqrbfr'.

     (latch info) wait_event=0 bits=0
         Location from where call was made: kqrbip:
       waiting for 5795df1a0 Child row cache objects level=4 child#=3
         Location from where latch is held: kghfrunp: clatch: wait:
         Context saved from call: 0
         state=busy, wlstate=free
           waiters [orapid (seconds since: put on list, posted, alive check)]:
            249 (1, 1234808779, 1)
 ...
            161 (1, 1234808779, 1)
            waiter count=34
           gotten 108711869 times wait, failed first 40623351 sleeps 1869257
           gotten 14091082 times nowait, failed: 43699893
         possible holder pid = 255 ospid=3663
       on wait list for 5795df1a0
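
The contention visible in the systemstate dump can also be watched live from
V$LATCH_CHILDREN (a generic example; the child latches are reported under the parent
latch name 'row cache objects'):

-- Generic example: gets/misses/sleeps per child of the 'row cache objects'
-- latch; a child with a disproportionate sleep count corresponds to the
-- stuck child seen in the systemstate dump.
SELECT child#, gets, misses, sleeps,
       immediate_gets, immediate_misses
  FROM v$latch_children
 WHERE name = 'row cache objects'
 ORDER BY sleeps DESC;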

3. awrrpt info

=> The AWR report (awrrpt) covering the hang shows that a large amount of 'KQR L PO' memory was freed during the hang

Pool   Name                                 Begin MB         End MB  % Diff
------ ------------------------------ -------------- -------------- -------
shared KQR L PO                              2,118.6        1,005.9  -52.52
shared free memory                             932.9        2,054.4  120.21
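
The same shrinkage can be tracked outside of AWR with repeated snapshots of V$SGASTAT
(a generic check):

-- Generic check: current size of the row cache memory ('KQR L PO') and of
-- free memory in the shared pool; run repeatedly to see the shrink/grow cycle.
SELECT pool, name, ROUND(bytes/1024/1024, 1) AS mb
  FROM v$sgastat
 WHERE pool = 'shared pool'
   AND name IN ('KQR L PO', 'free memory');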

=> The row cache (dictionary cache, the dc_* caches) usage decreases, too

dc_histogram_defs                 332,935
dc_object_ids                      57,894
dc_objects                        160,639
dc_segments                       382,909

=> Large cursors regularly enter the shared pool
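
Cursors with a large sharable memory footprint can be spotted with a query like the
following (a generic example against V$SQLAREA):

-- Generic example: top cursors by sharable memory currently in the shared pool.
SELECT *
  FROM (SELECT sql_id, sharable_mem, version_count, executions
          FROM v$sqlarea
         ORDER BY sharable_mem DESC)
 WHERE ROWNUM <= 10;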

Changes

 

Cause



