
Lots of cellirqbalance processes; dbnode crashed (Doc ID 2272585.1)

Last updated on JANUARY 17, 2020

Applies to:

Oracle Exadata Storage Server Software - Version 12.1.2.3.4 and later
Information in this document applies to any platform.

Symptoms

Problem Summary
---------------------------------------------------
node02 experienced 100% swap utilization on May 17th, accumulated a very large number of processes, and crashed.

Kernel version: 2.6.39-400.294.1.el6uek.x86_64 #1 SMP Wed Jan 11 08:46:38 PST 2017 x86_64

Image kernel version: 2.6.39-400.294.1.el6uek
Image version: 12.1.2.3.4.170111

Swap utilization reached 100%.

[root@node02 ~]# ps -ef|grep cellirqbalance| wc -l
66925

Changes

A CPU was replaced recently, on the 13th.

[1] ExaWatcher

Observations

1.1 SwapFree went to 0.

<05/17/2017 15:05:14> MemTotal: 793613544 kB  MemFree: 3400776 kB  SwapTotal: 25165820 kB  SwapFree: 436 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:19> MemTotal: 793613544 kB  MemFree: 3587360 kB  SwapTotal: 25165820 kB  SwapFree: 440 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:24> MemTotal: 793613544 kB  MemFree: 3493208 kB  SwapTotal: 25165820 kB  SwapFree: 444 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:29> MemTotal: 793613544 kB  MemFree: 3575996 kB  SwapTotal: 25165820 kB  SwapFree: 476 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:34> MemTotal: 793613544 kB  MemFree: 3809812 kB  SwapTotal: 25165820 kB  SwapFree: 0 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:39> MemTotal: 793613544 kB  MemFree: 4158488 kB  SwapTotal: 25165820 kB  SwapFree: 0 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:44> MemTotal: 793613544 kB  MemFree: 4137804 kB  SwapTotal: 25165820 kB  SwapFree: 0 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:49> MemTotal: 793613544 kB  MemFree: 4233544 kB  SwapTotal: 25165820 kB  SwapFree: 0 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:54> MemTotal: 793613544 kB  MemFree: 3909744 kB  SwapTotal: 25165820 kB  SwapFree: 4 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:05:59> MemTotal: 793613544 kB  MemFree: 4269680 kB  SwapTotal: 25165820 kB  SwapFree: 0 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:06:04> MemTotal: 793613544 kB  MemFree: 4279380 kB  SwapTotal: 25165820 kB  SwapFree: 4 kB  HugePages_Total: 262144  HugePages_Free: 107711

<05/17/2017 15:10:49> MemTotal: 793613544 kB  MemFree: 4236116 kB  SwapTotal: 25165820 kB  SwapFree: 408 kB  HugePages_Total: 262144  HugePages_Free: 107711
<05/17/2017 15:10:54> MemTotal: 793613544 kB  MemFree: 3642732 kB  SwapTotal: 25165820 kB  SwapFree: 0 kB  HugePages_Total: 262144  HugePages_Free: 107711
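
A quick way to find the samples where SwapFree collapsed is to scan the ExaWatcher Meminfo archives directly. The commands below are only a sketch: the archive path and the exact sample layout can differ between ExaWatcher versions, so adjust both to your environment.

# Assumed ExaWatcher archive location; adjust to your deployment.
cd /opt/oracle.ExaWatcher/archive/Meminfo.ExaWatcher
# Print only the samples in which SwapFree has fallen below ~1 MB (1-3 digit kB values).
bzcat *.bz2 | grep -E 'SwapFree: *[0-9]{1,3} kB'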

 

1.2 The Load average was extremely high.

top - 15:30:28 up 4 days, 17:01, 2 users, load average: 7058.47, 7064.84, 7043.68 <-==================
top - 15:30:36 up 4 days, 17:02, 2 users, load average: 7065.05, 7066.04, 7044.29
top - 15:30:45 up 4 days, 17:02, 2 users, load average: 7077.37, 7068.58, 7045.23
top - 15:30:53 up 4 days, 17:02, 2 users, load average: 7111.28, 7076.14, 7047.94
top - 15:31:08 up 4 days, 17:02, 2 users, load average: 7217.91, 7101.46, 7056.68
top - 15:31:18 up 4 days, 17:02, 2 users, load average: 7271.87, 7116.89, 7062.18
top - 15:31:41 up 4 days, 17:03, 2 users, load average: 7539.17, 7187.29, 7086.47
top - 15:31:53 up 4 days, 17:03, 2 users, load average: 7687.47, 7236.95, 7104.32
top - 15:32:04 up 4 days, 17:03, 2 users, load average: 7792.05, 7274.14, 7117.8
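
The same kind of scan can be run against the ExaWatcher Top archives to see when the load average started climbing. Again, the path below is an assumption for a typical deployment, and the 1000 threshold is arbitrary, only meant to filter out normal samples.

# Assumed ExaWatcher Top archive location; adjust as needed.
cd /opt/oracle.ExaWatcher/archive/Top.ExaWatcher
# Print only the top header lines whose 1-minute load average exceeds 1000.
bzcat *.bz2 | awk '/load average/ { split($0, a, "load average: "); split(a[2], l, ","); if (l[1]+0 > 1000) print }'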

1.3 From 00:00 on May 16th, the processes below started increasing in number, and the load on the system also started increasing.

From the ExaWatcher log file: 2017_05_15_23_45_38_PsExaWatcher.bz2
Process:
5 S root 372 398263 0 1 19 0 - 9448 4925 wait May14 ? 00:00:32 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 375 1 0 23 19 0 - 9488 4825 wait 06:42 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 380 398311 0 22 19 0 - 9448 4925 wait May14 ? 00:00:28 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 396 397804 0 22 19 0 - 9452 4925 wait 17:45 ? 00:00:05 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 405 398332 0 22 19 0 - 9448 4925 wait May13 ? 00:00:34 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 407 397692 0 2 19 0 - 9452 4925 wait 01:07 ? 00:00:16 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 417 1 0 1 19 0 - 9492 4825 wait 13:38 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 424 1 0 22 19 0 - 9488 4825 wait May13 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 430 1 0 0 19 0 - 9492 4825 wait May14 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 454 1 0 0 19 0 - 9484 4825 wait May13 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 488 1 0 0 19 0 - 9488 4825 wait May13 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 490 398830 0 0 19 0 - 9452 4925 wait May13 ? 00:00:47 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 491 398341 0 23 19 0 - 9448 4925 wait May14 ? 00:00:26 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 531 398451 0 1 19 0 - 9448 4925 wait May13 ? 00:00:43 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 544 397717 0 2 19 0 - 9444 4925 wait May14 ? 00:00:19 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 554 397678 0 6 19 0 - 9448 4925 wait 04:17 ? 00:00:14 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 560 1 0 22 19 0 - 9488 4825 wait May13 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 564 1 0 24 19 0 - 9488 4825 wait May14 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
5 S root 567 397790 0 0 19 0 - 9452 4925 wait 09:40 ? 00:00:10 bash /etc/rc.d/init.d/cellirqbalance daemon
4 S root 584 1 0 25 19 0 - 9484 4825 wait 08:27 ? 00:00:00 bash /etc/rc.d/init.d/cellirqbalance daemon
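
To pin down when the growth started, one option is to count the cellirqbalance entries in each ExaWatcher Ps archive for the affected days. This is a sketch only; the archive directory and file-name pattern are assumptions based on the file name quoted above.

# Assumed ExaWatcher Ps archive location; adjust as needed.
cd /opt/oracle.ExaWatcher/archive/Ps.ExaWatcher
for f in 2017_05_1[5-7]*PsExaWatcher*.bz2; do
  # One line per archive: file name and the number of cellirqbalance entries it contains.
  echo "$f $(bzcat "$f" | grep -c 'cellirqbalance daemon')"
done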

1.4 top and meminfo show the same behavior.

top - 14:35:51 up 4 days, 16:07, 1 user, load average: 6944.30, 6952.81, 6913.86 <============== Very high load average.
Tasks: 92191 total, 10 running, 92181 sleeping, 0 stopped, 0 zombie
Mem: 793613544k total, 788695292k used, 4918252k free, 7764k buffers
Swap: 25165820k total, 25165816k used, 4k free, 989768k cached <============== Swapping.
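
On a live node the same picture can be confirmed with standard tools, independently of ExaWatcher:

# Point-in-time view of load average, task count and memory.
top -b -n 1 | head -5
# Current swap usage and load average from the kernel.
grep -E 'SwapTotal|SwapFree' /proc/meminfo
cat /proc/loadavg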

[2] Lots of cellirqbalance processes

# ps -ef|grep cellirqbalance| wc -l
66925
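
To understand whether these are orphaned children or are still being re-spawned from a small set of parents, grouping the count by parent PID helps. A minimal sketch using standard ps/awk (output formatting may vary slightly by ps version):

# Count cellirqbalance processes per parent PID.
ps -eo ppid,cmd | grep '[c]ellirqbalance' | awk '{ c[$1]++ } END { for (p in c) print c[p], "children of PPID", p }' | sort -rn | head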

[3] DBMCLI> list dbserver detail

DBMCLI> list dbserver detail
name: node02
bbuStatus: normal
coreCount: 44/44
cpuCount: 68/88 >>>>> HERE <<<<<<
kernelVersion: 2.6.39-400.294.1.el6uek.x86_64
locatorLEDStatus: off
makeModel: Oracle Corporation ORACLE SERVER X6-2
metricHistoryDays: 7
msVersion: OSS_12.1.2.3.4_LINUX.X64_170111

On a different node, where all is fine:

DBMCLI> list dbserver detail
name: node01
bbuStatus: normal
coreCount: 44/44
cpuCount: 88/88
diagHistoryDays: 7

[4] The messages file on node02 shows: "dbms: resourcecontrol: DBMS is updating the number of active cores per socket."
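
To gauge how often the resourcecontrol message is being logged around the incident, a simple count per day from the messages files is enough. This sketch assumes the standard /var/log/messages location and uncompressed rotated files; adjust for your log rotation setup.

# Count resourcecontrol messages per syslog day (month/day columns).
grep -h 'resourcecontrol' /var/log/messages* | awk '{ print $1, $2 }' | sort | uniq -c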

Cause
