ReadWriteBackingMap Configured With Write Behind Can Lead to an OutOfMemoryException With Sporadic Bulk Cache Updates (Doc ID 1360070.1)

Last updated on NOVEMBER 03, 2016

Applies to:

Oracle Coherence - Version: 3.6.1.2 and later [Release: AS10g and later]
Information in this document applies to any platform.

Goal

This article aims to explain the circumstances that can lead to an OutOfMemoryError when a cache with a Read Write Backing Map, configured to use write-behind, is subjected to sporadic bulk updates, followed immediately by periods of inactivity. 

In this scenario, when the write-behind delay expires for the bulk changes, an internal event is generated for each change and placed on an internal event queue.  This event queue is only flushed asynchronously when there is further activity on the cache.

Therefore, if a short burst of heavy activity is followed by a quiet period with no further cache activity, then when the write-behind timeouts trigger, a large number of events accumulate on the queue, potentially leading to an OutOfMemoryError if there is insufficient free space on the Java Virtual Machine (JVM) heap to hold them.
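For context, a write-behind cache of the kind described is typically configured with a non-zero <write-delay> in a read-write backing map. The following is a minimal sketch only; the cache name, scheme name, and CacheStore class are illustrative and not taken from the affected environment:

```xml
<!-- Minimal sketch of a write-behind configuration (names are illustrative) -->
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-cache</cache-name>
      <scheme-name>example-write-behind</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-write-behind</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <!-- Hypothetical CacheStore implementation -->
              <class-name>com.example.ExampleCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
          <!-- A non-zero write-delay enables write-behind; entries are
               written to the CacheStore asynchronously after this delay -->
          <write-delay>10s</write-delay>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

With a configuration along these lines, a bulk update of N entries followed by silence means all N write-behind timeouts fire without any subsequent cache activity to drive the asynchronous flush of the resulting internal events.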

The error can look similar to the following, though note that this issue is not dependent on using POF:

2011-09-01 13:10:03.678/2052407.745 Oracle Coherence GE 3.6.1.2 <Error> (thread=DistributedCache:DistributedCustomerPofCache, member=11): Terminating PartitionedCache due to unhandled exception: java.lang.OutOfMemoryError

The best way to see whether this problem is present is to capture a Java heap histogram, for example using the command:

jmap -histo <jvm process id>

and see if you have a large number of com.tangosol.util.internal.BMEventFabric$EventHolder objects on the heap. 

For example:

$ jmap -histo 16824 | egrep '#bytes|com.tangosol.util.internal.BMEventFabric\$EventHolder'
num    #instances  #bytes  class name
26     12522       5862680 com.tangosol.util.internal.BMEventFabric$EventHolder

However, if you only have a binary heap dump, for example because you are using Sun Java with -XX:+HeapDumpOnOutOfMemoryError among your cache node's java arguments, then a third-party tool such as the Eclipse Memory Analyzer will allow you to generate a histogram from the binary dump.
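For reference, enabling a heap dump on OutOfMemoryError for a cache node typically looks like the following. This is an illustrative sketch only; the heap size, dump path, and classpath are examples, not recommendations:

```shell
# Illustrative JVM arguments for a Coherence cache server node
# (heap size and dump path are examples only)
java -Xmx1g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/tmp/coherence-dumps \
     -cp coherence.jar com.tangosol.net.DefaultCacheServer
```

The resulting .hprof file can then be opened in the Eclipse Memory Analyzer to produce a histogram and check for a large population of BMEventFabric$EventHolder objects.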

Solution
