
Setting Coherence cluster-quorum-policy on ECE CNE (Doc ID 2752484.1)

Last updated on FEBRUARY 18, 2021

Applies to:

Oracle Communications BRM - Elastic Charging Engine - Version 12.0.0.3.0 and later
Information in this document applies to any platform.

Goal

Q: In the current Cloud Native Environment (CNE), does Elastic Charging Engine (ECE) set the Coherence cluster-quorum-policy based on the number of ECS pods?

To avoid data loss from the ECE Coherence cache, the timeout-survivor-quorum should be set to an appropriate value. Can this be configured in a CNE deployment?

In the case of an on-premises solution, the formula is: timeout-survivor-quorum = (number of hosts - 1) * (number of ECSes per host), where "host" is a VM or a bare-metal server.
For example, for an on-premises ECE setup with 8 ECE VMs and 4 ECSes per VM, the quorum is (8 - 1) * 4 = 28:
# Configuration snippet from charging-coherence-override-(dev|prod).xml:
 <cluster-quorum-policy>
   <timeout-survivor-quorum role="OracleCommunicationBrmChargingServerChargingLauncher">28</timeout-survivor-quorum>
 </cluster-quorum-policy>
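The on-premises sizing rule above can be sketched as a small helper. This is an illustrative Python snippet, not part of any ECE tooling; the function name is hypothetical:

```python
def timeout_survivor_quorum(num_hosts: int, ecses_per_host: int) -> int:
    """Compute timeout-survivor-quorum per the on-prem formula:
    (number of hosts - 1) * (number of ECSes per host).

    Tolerates the loss of one full host without clearing the cache:
    the survivors on the remaining hosts still meet the quorum.
    """
    return (num_hosts - 1) * ecses_per_host

# Example from the article: 8 ECE VMs, 4 ECSes per VM
print(timeout_survivor_quorum(8, 4))  # 28
```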


 
References:
- ECE Cache Empty After Network Issue & ECE Getting Restarted (Doc ID 2393855.1)
- Customer Information Missing In The ECE Cache (Doc ID 2223435.1)
 

Solution




