Excessive "latch: shared pool" waits after upgrading from 12c to 19c on an Oracle SuperCluster / Solaris system
(Doc ID 2877155.1)
Last updated on SEPTEMBER 22, 2022
Applies to:
Oracle Database - Enterprise Edition - Version 19.0.0.0 and later
Oracle Solaris on SPARC (64-bit)
Many jobs run concurrently via job queue processes.
Each job queue process also runs queries on GV$ or DBA_ objects from one of the two nodes, thereby using PQ slaves and causing excessive DA enqueue waits.
Removing the related SQL from the application code helped to alleviate the DA enqueue waits but did not reduce the shared pool latch waits. This leads to SLA misses for important, higher-workload job runs.
Flushing the shared pool once a day keeps the shared pool latch waits away for, say, 16 hours or so, but the SLA misses still continue.
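The daily flush described above can be issued as follows; this is a minimal sketch of the workaround only (the once-a-day scheduling is left to the site's job scheduler), not a fix for the underlying contention:

```sql
-- Workaround only: flushing the shared pool ages out cached cursors and
-- temporarily relieves "latch: shared pool" contention, at the cost of
-- extra hard parsing immediately afterwards. Requires ALTER SYSTEM privilege.
ALTER SYSTEM FLUSH SHARED_POOL;
```

On a RAC system the statement acts on the instance it is issued against, so it would need to be run on each node where the contention is seen.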
The "Top 10 Foreground Events" section of AWR reports from the slow-performance period shows the "latch: shared pool" wait event with a high average wait time.
Top 10 Foreground Events by Total Wait Time

Event                         Waits        Total Wait Time (sec)  Avg Wait   % DB time  Wait Class
----------------------------  -----------  ---------------------  ---------  ---------  -----------
DB CPU                                     835.8K                            87.9
db file sequential read       71,045,058   79.3K                  1.12ms     8.3        User I/O
latch: shared pool            152,963      7529.5                 49.22ms    .8         Concurrency
gc cr grant 2-way             21,511,898   7264.9                 337.71us   .8         Cluster
latch: cache buffers chains   1,310,569    6417.9                 4.90ms     .7         Concurrency
gc cr grant busy              8,617,951    4347.4                 504.46us   .5         Cluster
gc current grant busy         6,897,619    3593.8                 521.02us   .4         Cluster
gc buffer busy acquire        678,762      2166.8                 3.19ms     .2         Cluster
gc current grant 2-way        5,805,542    2099.5                 361.63us   .2         Cluster
enq: TX - index contention    671,546      1276                   1.90ms     .1         Concurrency
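Between AWR snapshots, the shared pool latch contention shown above can be sampled directly on both RAC nodes; a minimal sketch using the standard GV$LATCH view:

```sql
-- Misses and sleeps on the shared pool latch, per instance. Rising SLEEPS
-- between two samples corresponds to time accumulating under the
-- "latch: shared pool" wait event in the AWR report.
SELECT inst_id, name, gets, misses, sleeps, wait_time
  FROM gv$latch
 WHERE name = 'shared pool'
 ORDER BY inst_id;
```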
The only change made was the upgrade of the database from 12c to 19c.
The same Oracle SuperCluster SPARC system was used under 19c.
The same Solaris 11 OS was used for the Oracle 19c database release.