Setting Coherence cluster-quorum-policy on ECE CNE
(Doc ID 2752484.1)
Last updated on FEBRUARY 18, 2021
Applies to: Oracle Communications BRM - Elastic Charging Engine - Version 188.8.131.52.0 and later
Information in this document applies to any platform.
Qn: In the current Cloud Native Environment (CNE), does Elastic Charging Engine (ECE) set the Coherence cluster-quorum-policy based on the number of ECS pods?
To avoid data loss from the ECE Coherence cache, the timeout-survivor-quorum should be set to an appropriate value. Can this be configured in a CNE deployment?
For example, an on-premises ECE setup with 8 ECE VMs and 4 ECS instances per VM (8 × 4 = 32 instances; a survivor quorum of 28 tolerates losing the 4 instances of one VM):
# Configuration snippet from charging-coherence-override-(dev|prod).xml:
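The snippet itself is not reproduced in this extract. As a sketch only, a Coherence operational override enforcing a survivor quorum of 28 would look roughly like the following; the element names follow the Coherence operational deployment descriptor, but the exact contents of the ECE file may differ:

```xml
<?xml version="1.0"?>
<coherence>
  <cluster-config>
    <!-- Defer cluster-membership timeout decisions unless at least 28
         members would survive, so a transient network outage does not
         evict enough storage members to lose cache data. -->
    <cluster-quorum-policy>
      <timeout-survivor-quorum>28</timeout-survivor-quorum>
    </cluster-quorum-policy>
  </cluster-config>
</coherence>
```

Coherence also supports scoping the quorum to a member role (e.g. a `role` attribute on `timeout-survivor-quorum`), which may be preferable if non-storage members join the same cluster.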
References:
- ECE Cache Empty After Network Issue & ECE Getting Restarted (Doc ID 2393855.1)
- Customer Information Missing In The ECE Cache (Doc ID 2223435.1)