Support for NoSQL Storage Node Failure by Allowing Multiple ECE NoSQL Configurations
Last updated on MAY 02, 2017
Applies to: Oracle Communications BRM - Elastic Charging Engine - Version 18.104.22.168.0 and later
Information in this document applies to any platform.
On version 22.214.171.124.0, Rating business logic:
When a running NoSQL storage node goes down, all ECE nodes continue to work without issue. However, if any ECE node then needs to be restarted, it reports an error but still starts; any usage subsequently rated through that ECE node will encounter problems.
The issue can be reproduced at will with the following steps:
1. Install a NoSQL 3x3 cluster, where the storage nodes (sn1, sn2, and sn3) are deployed on the ECE servers (server1, server2, and server3) respectively.
2. Start the charging nodes (ecs1_s1, ecs1_s2, ecs1_s3) on the ECE servers (server1, server2, and server3) respectively.
3. Now bring down storage node sn1 on ECE server 1. ECE nodes 1, 2, and 3 will keep running without any issue.
Note: only sn1 is down; sn2 and sn3 remain up and running.
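Step 3 above can be sketched with the standard Oracle NoSQL Database kvstore.jar utilities. The KVROOT path, host names, and admin port below are assumed examples for this topology; substitute the values from your own deployment.

```shell
# On ECE server 1: stop the Storage Node Agent hosting sn1.
# /opt/kvroot is an assumed example root directory.
java -jar $KVHOME/lib/kvstore.jar stop -root /opt/kvroot

# From any surviving server: confirm sn2 and sn3 are still up.
# Host and port (5000) are assumed examples.
java -jar $KVHOME/lib/kvstore.jar ping -host server2 -port 5000
```

With sn1 stopped, the ping output should show the remaining storage nodes as running, matching the state described in the note above.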
A) Restarting an existing charging node (e.g., ecs1_s1 on ECE server 1) or starting a new charging node (e.g., ecs2_s1 on ECE server 1) results in the error above, which is expected because storage node sn1 is down.
B) While sn1 is down, if any charging node is restarted and a usage request arrives for an account that belongs to ecs1_s1, the rated event is not written to NoSQL; instead, it is removed from the RatedEvent cache after the expiry delay.
This is because the charging nodes are up but their connection to the storage node fails.
C) Similarly, restarting the Rated Event Formatter (REF) on ECE server 1 results in an error, and the process does not start.