
CJQ Job Queue Process Consumes High CPU (Doc ID 1638277.1)

Last updated on SEPTEMBER 24, 2021

Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.1 to 11.2.0.3 [Release 11.2]
Oracle Database - Enterprise Edition - Version 11.2.0.4 to 11.2.0.4 [Release 11.2]
Oracle Database Cloud Schema Service - Version N/A and later
Gen 1 Exadata Cloud at Customer (Oracle Exadata Database Cloud Machine) - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.

Symptoms

The alert log shows the message "ORA-32701: Possible hangs up to hang ID=59 detected".

Snippet from the alert log:

Errors in file /u01/app/oracle/diag/rdbms/<instance_name>/<instance_name>/trace/D1632P2_dia0_19566.trc  (incident=393636):
ORA-32701: Possible hangs up to hang ID=0 detected
Incident details in: /u01/app/oracle/diag/rdbms/<instance_name>/<instance_name>/incident/incdir_393636/D1632P2_dia0_19566_i393636.trc
DIA0 terminating blocker (ospid: 29909 sid: 264 ser#: 3) of hang with ID = 59
   requested by master DIA0 process on instance 1
   Hang Resolution Reason: Although hangs of this root type are typically
   self-resolving, the previously ignored hang was automatically resolved.
   by terminating session sid: 264 ospid: 29909
Sat Feb 15 23:59:35 2014
Sweep [inc][393636]: completed
Sweep [inc2][393636]: completed
DIA0 successfully terminated session sid:264 ospid:29909 with status 31.
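To confirm which session belongs to the job queue coordinator and match it against the ospid/sid reported by DIA0, a query along these lines can be used (a minimal sketch against the standard V$BGPROCESS, V$PROCESS and V$SESSION views; it is not part of the original note):

-- Map the CJQ0 background process to its session and OS process id,
-- so it can be compared with the ospid/sid printed in the alert log.
SELECT s.sid, s.serial#, p.spid, s.status, s.event, s.program
  FROM v$bgprocess b
  JOIN v$process   p ON p.addr  = b.paddr
  JOIN v$session   s ON s.paddr = p.addr
 WHERE b.name = 'CJQ0';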

The incident trace file shows the following:

Resolvable Hangs in the System
                     Root       Chain Total               Hang              
  Hang Hang          Inst Root  #hung #hung  Hang   Hang  Resolution        
    ID Type Status   Num  Sess   Sess  Sess  Conf   Span  Action            
 ----- ---- -------- ---- ----- ----- ----- ------ ------ -------------------
    59 HANG RSLNPEND    2   264     3     6   HIGH  LOCAL Terminate Process     <<<<<<<< session with sid 264 causing the blocking
 Hang Resolution Reason: Although hangs of this root type are typically
   self-resolving, the previously ignored hang was automatically resolved.

     inst# SessId  Ser#     OSPID PrcNm Event
     ----- ------ ----- --------- ----- -----
         1    851  6687     25687    FG row cache lock
         2     73  6739     27189  CJQ0 enq: JS - queue lock
         2    264     3     29909  CJQ0 not in wait                         <<<<<<<<<<<<<  CJQ0 process causing the hang
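The chain shows a foreground session on instance 1 waiting on "row cache lock" and a second CJQ0 session waiting on "enq: JS - queue lock", both ultimately blocked behind the instance 2 CJQ0 session (sid 264), which is "not in wait", i.e. likely burning CPU. If the Diagnostics Pack is licensed, the same chain can usually be reconstructed after the fact from ASH history (a sketch; the sid and time window are taken from this trace and are illustrative only):

-- ASH samples for the root session and anything it was blocking around the hang.
SELECT sample_time, instance_number, session_id, session_serial#,
       event, blocking_session, sql_id
  FROM dba_hist_active_sess_history
 WHERE sample_time BETWEEN TIMESTAMP '2014-02-15 23:45:00'
                       AND TIMESTAMP '2014-02-16 00:05:00'
   AND (session_id = 264 OR blocking_session = 264)
 ORDER BY sample_time;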

*** 2014-02-15 23:59:33.901
Process diagnostic dump for oracle@dm04db02.db.gen.local (CJQ0), OS id=29909,
pid: 52, proc_ser: 2, sid: 264, sess_ser: 3
-------------------------------------------------------------------------------
os thread scheduling delay history: (sampling every 1.000000 secs)
 0.000000 secs at [ 23:59:33 ]
   NOTE: scheduling delay has not been sampled for 0.333174 secs
 0.000000 secs from [ 23:59:29 - 23:59:34 ], 5 sec avg
 0.000000 secs from [ 23:58:34 - 23:59:34 ], 1 min avg
 0.000000 secs from [ 23:54:33 - 23:59:34 ], 5 min avg
loadavg : 40.05 53.50 34.35
Memory (Avail / Total) = 12043.22M / 145218.77M
Swap (Avail / Total) = 14248.75M /  24575.99M
F S UID        PID  PPID  C PRI  NI ADDR SZ WCHAN  STIME TTY          TIME CMD
0 D oracle   29909     1  1  80   0 - 218435 sync_p Jan09 ?       16:23:51 ora_cjq0_D1632P2
Short stack dump:
ksedsts()+461<-ksdxfstk()+32<-ksdxcb()+1876<-sspuser()+112<-__sighandler()<-kghrcappl()+245<-kghfrempty_ex()+133<-qesmmIPgaFreeCb()+350<-ksu_dispatch_tac()+1591<-qerfxFetch()+4614<-rwsfcd()+103<-qerjotFetch()+461<-qerjoFetch()+945<-rwsfcd()+103<-qerjotFetch()+461<-qerjotFetch()+461<-rwsfcd()+103<-qerjotFetch()+461<-rwsfcd()+103<-qerjotFetch()+461<-qerflFetchOutside()+101<-qeruaFetch()+574<-rwsfcd()+103<-qeruaFetch()+574<-qervwFetch()+139<-qersoProcessULS()+203<-qersoFetch()+5968<-opifch2()+2995<-opifch()+64<-opiodr()+916<-kpoodr()+653<-upirtrc()+2497<-kpurcsc()+98<-kpufch0()+1978<-kpufch()+1519<-OCIStmtFetch()+15<-jskqJobQReadDisk()+2718<-jskqJobQRefresh()+1522<-jscrgq_refresh_generic_q()+323<-jscrs_select0()+1254<-jscrs_select()+614<-rpiswu2()+1618<-kkjcjexe()+721<-kkjssrh()+561<-ksbcti()+513<-ksbabs()+1735<-ksbrdp()+971<-opirip()+623<-opidrv()+603<-sou2o()+103<-opimai_real()+266<-ssthrdmain()+252<-main()+201<-__libc_start_main()+244<-_start()+36

-------------------------------------------------------------------------------
Process diagnostic dump actual duration=0.760000 sec
 (max dump time=15.000000 sec)
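The short stack above appears to show CJQ0 inside jskqJobQRefresh/jskqJobQReadDisk, fetching from its job queue refresh query. If further stack samples are needed while the process is spinning, they can be taken manually with oradebug from a SYSDBA SQL*Plus session (a sketch; ospid 29909 is the one from this dump and would differ on another system):

-- Attach to the CJQ0 OS process and sample its call stack a few times.
-- Run from SQL*Plus connected / as sysdba.
oradebug setospid 29909
oradebug short_stack
oradebug short_stack
oradebug short_stack

If repeated samples keep showing the same fetch path, the process is spending its time in that query rather than being stuck on a wait.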


current sql: select OBJOID,  CLSOID, RUNTIME, PRI, JOBTYPE,  SCHLIM,  WT, INST,  RUNNOW, ENQ_SCHLIM from ( select a.obj# OBJOID, a.class_oid CLSOID,    decode(bitand(a.flags, 16384), 0, a.next_run_date,           a.last_enabled_time) RUNTIME,    (2*a.priority +     decode(bitand(a.job_status, 4), 0, 0,            decode(a.running_ 
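The statement above is the coordinator's refresh query against the scheduler job queue; the trace records it only in truncated form. To see the complete text and basic execution statistics of whatever CJQ0 is currently running, something like the following can be used (a sketch against V$SESSION/V$SQL; the LIKE filter on the program name is an assumption about how the background process is labelled):

-- Full SQL text and cumulative statistics for the statement CJQ0 is executing.
SELECT q.sql_id, q.executions, q.cpu_time, q.buffer_gets, q.sql_fulltext
  FROM v$session s
  JOIN v$sql     q ON q.sql_id       = s.sql_id
                  AND q.child_number = s.sql_child_number
 WHERE s.program LIKE '%(CJQ0)%';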

Changes

Cause
