ORA-4031 due to Huge "ges resource dynamic" and "ges enqueues" and Instance Terminates: ORA-29770: 'SGA: allocation forcing component growth' (Doc ID 1951758.1)

Last updated on APRIL 15, 2017

Applies to:

Oracle Database - Enterprise Edition - Version 12.1.0.2 and later
Information in this document applies to any platform.

Symptoms

A 12c RAC database instance terminates. Two symptom sets may be observed, individually or together:

Symptom set 1 - ORA-4031

Database alert.log

Errors in file /u01/app/diag/rdbms/prddb/PRDDB1/trace/PRDDB1_ppa7_49564.trc (incident=120426):
ORA-04031: unable to allocate 12312 bytes of shared memory ("shared pool","unknown object","KKSSP^231","kglseshtTable")
Incident details in: /u01/app/diag/rdbms/prddb/PRDDB1/incident/incdir_120426/PRDDB1_ppa7_49564_i120426.trc
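
While the instance is still up, the remaining shared pool free memory can be watched directly. The following is a minimal sketch using the standard v$sgastat view; the query itself is illustrative and not taken from the trace above:

SQL> -- Illustrative check: shared pool free memory remaining
SQL> select pool, name, round(bytes/1024/1024) mb
     from v$sgastat
     where pool = 'shared pool'
       and name = 'free memory';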

The incident file referenced above shows:

TOP 20 MEMORY USES ACROSS SGA HEAP 1 - 4
----------------------------------------------
"ges resource dynamic " 14 GB 46%
"ges enqueues " 7475 MB 25%

==============================================
TOP 20 MAXIMUM MEMORY USES ACROSS SGA HEAP 1 - 4
----------------------------------------------
"ges resource dynamic " 27 GB 40%
"ges enqueues " 15 GB 21%

 

Symptom set 2 - ORA-29770: waits for event 'SGA: allocation forcing component growth'

Database alert.log

LMD0 (ospid: 31458) waits for event 'SGA: allocation forcing component growth' for 88 secs.
Errors in file /u01/app/diag/rdbms/prddb/PRDDB2/trace/PRDDB2_lmhb_31474.trc (incident=120219):
ORA-29770: global enqueue process LMD0 (OSID 31458) is hung for more than 70 seconds
Incident details in: /u01/app/diag/rdbms/prddb/PRDDB2/incident/incdir_120219/PRDDB2_lmhb_31474_i120219.trc

The incident file referenced above shows:

LMD0 (ospid: 31458) has no heartbeats for 89 sec. (threshold 70)
: heartbeat state 0x1.ffff (inwait) pso-flag 0x100
: waiting for event 'SGA: allocation forcing component growth' for 88 secs with wait_id 69252822.
===[ Wait Chain ]===
LMD0 (ospid: 31458) waits for event 'SGA: allocation forcing component growth'.
MMAN (ospid: 31440) waits for event 'enq: PE - contention'.


Short stack dump:
ksedsts()+572<-ksdxfstk()+44<-ksdxcb()+1004<-sspuser()+240<-ssprtmin()+188<-__sighndlr()+12<-call_user_handler()+868<-sigacthandler()+92<-_syscall6()+32<-sskgpwwait()+232<-skgpwwait()+192<-ksliwat()+2112<-kslwaitctx()+108<-kjusuc()+7872<-ksipgetctxia()+5876<-ksqcmi()+35184<-ksqgtlctx()+5928<-ksqgelctx()+760<-kspgetpeeq()+308<-kspset1()+1948<-kspset0()+60<-ksp_bg_param_update()+320<-kmgs_update_param_auto()+1104<-kmgsdrv()+10720<-ksbabs()+1456<-ksbrdp()+1384<-opirip()+904<-opidrv()+648<-sou2o()+112<-opimai_real()+276<-ssthrdmain()+456<-main()+320<-_start()+300
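
A chain like the one above (LMD0 blocked behind MMAN) can also be inspected on a live instance. This is a hedged sketch against the standard v$wait_chains view, available since 11.2; the column selection is illustrative:

SQL> -- Illustrative: current wait chains, including blocked background processes
SQL> select chain_id, instance, osid, wait_event_text, in_wait_secs,
            blocker_instance, blocker_osid
     from v$wait_chains
     order by chain_id, in_wait_secs desc;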

 

Common symptoms:

Leading up to the failure, the following queries showed gigabytes of GES memory in the shared pool and millions of DX and BB enqueue resources:

SQL> select * from (
       select pool, name, round(bytes/1024/1024) mb, con_id
       from v$sgastat
       where bytes > 10*1024*1024
       order by bytes desc)
     where rownum < 10;

POOL         NAME                               MB     CON_ID
------------ -------------------------- ---------- ----------
shared pool  ges resource dynamic            11038          0
shared pool  ges enqueues                     6979          0
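
Whether these components leaked gradually or ballooned at once can be read from AWR history. A sketch assuming the Diagnostics Pack license and the standard dba_hist_sgastat / dba_hist_snapshot views:

SQL> -- Illustrative: GES shared pool components across AWR snapshots
SQL> select sn.snap_id, sn.end_interval_time, st.name,
            round(st.bytes/1024/1024) mb
     from dba_hist_sgastat st
     join dba_hist_snapshot sn
       on sn.snap_id = st.snap_id
      and sn.dbid = st.dbid
      and sn.instance_number = st.instance_number
     where st.name like 'ges%'
     order by sn.snap_id, st.name;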


SQL> select * from v$resource_limit order by resource_name;

RESOURCE_NAME      CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_ALLOCATION LIMIT_VALUE     CON_ID
------------------ ------------------- --------------- ------------------ ----------- ----------
ges_cache_ress                 5201044         5201044 0                  UNLIMITED            0
ges_locks                      7523332         7525172 47795              UNLIMITED            0
ges_ress                      12737452        12737454 32728              UNLIMITED            0

(output trimmed to the GES-related rows)
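
Because GES resources are distributed across the cluster, the same counters should be compared on every node. A minimal sketch using the standard gv$resource_limit view:

SQL> -- Illustrative: GES utilization per instance
SQL> select inst_id, resource_name, current_utilization, max_utilization
     from gv$resource_limit
     where resource_name in ('ges_ress', 'ges_locks', 'ges_cache_ress')
     order by resource_name, inst_id;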


SQL> select substr(resource_name, instr(resource_name,'[',1,3)+1, 2) lock_type,
            master_node,
            count(*)
     from gv$ges_resource
     group by substr(resource_name, instr(resource_name,'[',1,3)+1, 2), master_node;

LOCK_TYPE MASTER_NODE   COUNT(*)
--------- ----------- ----------
DX                  0    3788172
BB                  0    3785418
DX                  2    1331473
BB                  2    1329750
BB                  1    1299299
DX                  1    1298632
BL                  0     282434
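
DX and BB enqueues are associated with distributed (XA/global) transactions, so the local enqueue counts can be cross-checked per instance as well. A hedged sketch using the standard gv$lock view:

SQL> -- Illustrative: DX/BB enqueue counts per instance
SQL> select inst_id, type, count(*)
     from gv$lock
     where type in ('DX', 'BB')
     group by inst_id, type
     order by inst_id, type;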

Cause
