
Coherence OutOfMemoryError Exception Triggered by High EntryProcessor Load During a Failover (Doc ID 1424173.1)

Last updated on AUGUST 21, 2018

Applies to:

Oracle Coherence - Version 3.7.1 and later
Information in this document applies to any platform.
***Checked for relevance on 22-Aug-2013***
***Checked for relevance on 21-Aug-2018***

Symptoms

A Coherence storage node reports a Java OutOfMemoryError. The applications using the cluster make use of entry processors, and recently one or more cluster storage nodes have been restarted for operational reasons; that is, they were shut down cleanly rather than failing abnormally before the OutOfMemoryError occurs.
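
The entry processor workload referred to above is application-specific. Purely as an illustrative sketch (the cache name "example-cache", the key scheme, and the increment logic are assumptions, not details taken from the affected application), the following shows the kind of per-entry invocation that, if issued heavily while a storage node is restarting and its partitions are being redistributed, matches the scenario described:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

import java.io.Serializable;

// Hypothetical entry processor: increments an Integer value in place on the
// storage node that owns the entry and returns the updated value.
public class IncrementProcessor extends AbstractProcessor implements Serializable {

    public Object process(InvocableMap.Entry entry) {
        Integer current = (Integer) entry.getValue();
        int updated = (current == null ? 0 : current.intValue()) + 1;
        entry.setValue(Integer.valueOf(updated));
        return Integer.valueOf(updated);
    }

    public static void main(String[] args) {
        // Sustained invocations like these while a storage node is being
        // restarted reproduce the load pattern described in the symptoms.
        NamedCache cache = CacheFactory.getCache("example-cache");
        for (int i = 0; i < 100000; i++) {
            cache.invoke("key-" + (i % 1000), new IncrementProcessor());
        }
        CacheFactory.shutdown();
    }
}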

Configure the Coherence nodes to produce a heap dump when the OutOfMemoryError occurs by setting the following Java runtime flags:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=<node-specific-path>
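
As a rough illustration only (the heap sizes, dump directory, classpath, and cache configuration file name below are placeholders; substitute a directory local to each node for the dump path), a storage node started with DefaultCacheServer might be launched with these flags as follows:

java -server -Xms4g -Xmx4g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/dumps/coherence-node1 \
     -Dtangosol.coherence.cacheconfig=example-cache-config.xml \
     -Dtangosol.coherence.distributed.localstorage=true \
     -cp coherence.jar:app.jar \
     com.tangosol.net.DefaultCacheServer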


Then running a "Leak Suspects" report on the resulting heap dump in the Eclipse Memory Analyzer (MAT) identifies the following as the suspect:

HeapDump 1:
+++++++++++
One instance of "com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache" loaded by "sun.misc.Launcher$AppClassLoader @ 0xe2d89210" occupies 212,533,792 (73.08%) bytes. The memory is accumulated in one instance of "com.tangosol.util.SparseArray$ObjectNode" loaded by "sun.misc.Launcher$AppClassLoader @ 0xe2d89210".


Looking more closely at the PartitionedCache instance shows that the SparseArray corresponds to the field:

__m_PendingRequestInfo

Changes

This issue is not seen in 3.7.0 or earlier releases and can occur after upgrading to 3.7.1.

Cause
