
Exadata: mdadm displays "removed" partition devices on a cell node (Doc ID 2771628.1)

Last updated on FEBRUARY 10, 2022

Applies to:

Exadata X6-2 Hardware - Version All Versions and later
Information in this document applies to any platform.

Symptoms

The customer applied a patch and upgraded cellnode1; afterwards one of the RAID mirror partitions was missing and was reported in the removed state on <CELLNODE01>.

 

<CELLNODE02/03> show the correct results, as below.

Only a single RAID mirror member was observed on cellnode01.

Two mirror members were displayed on cellnode02/03, as expected.

dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root 'for x in 1 2 5 6 7 8 11; do mdadm --detail /dev/md$x | grep /dev/; done'
CELLNODE01: /dev/md1:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
CELLNODE01: 0 8 10 0 active sync /dev/sda10
CELLNODE01: /dev/md2:
CELLNODE01: 0 8 9 0 active sync /dev/sda9
CELLNODE01: /dev/md5:>>>>>>>>>>>>>>>>>>>>>>>>>>
CELLNODE01: 0 8 5 0 active sync /dev/sda5
CELLNODE01: /dev/md6:
CELLNODE01: 0 8 6 0 active sync /dev/sda6
CELLNODE01: /dev/md7:
CELLNODE01: 0 8 7 0 active sync /dev/sda7
CELLNODE01: /dev/md8:
CELLNODE01: 0 8 8 0 active sync /dev/sda8
CELLNODE01: /dev/md11:
CELLNODE01: 0 8 11 0 active sync /dev/sda11
CELLNODE02: /dev/md1:>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
CELLNODE02: 0 8 10 0 active sync /dev/sda10
CELLNODE02: 1 8 26 1 active sync /dev/sdb10
CELLNODE02: /dev/md2:
CELLNODE02: 0 8 9 0 active sync /dev/sda9
CELLNODE02: 1 8 25 1 active sync /dev/sdb9
CELLNODE02: /dev/md5:>>>>>>>>>>>>>>>>>>>>>>>
CELLNODE02: 0 8 5 0 active sync /dev/sda5
CELLNODE02: 1 8 21 1 active sync /dev/sdb5
CELLNODE02: /dev/md6:
CELLNODE02: 0 8 6 0 active sync /dev/sda6
CELLNODE02: 1 8 22 1 active sync /dev/sdb6
CELLNODE02: /dev/md7:
CELLNODE02: 0 8 7 0 active sync /dev/sda7
CELLNODE02: 1 8 23 1 active sync /dev/sdb7
CELLNODE02: /dev/md8:
CELLNODE02: 0 8 8 0 active sync /dev/sda8
CELLNODE02: 1 8 24 1 active sync /dev/sdb8
CELLNODE02: /dev/md11:
CELLNODE02: 0 8 11 0 active sync /dev/sda11
CELLNODE02: 1 8 27 1 active sync /dev/sdb11
CELLNODE03: /dev/md1:
CELLNODE03: 0 8 10 0 active sync /dev/sda10
CELLNODE03: 1 8 26 1 active sync /dev/sdb10
CELLNODE03: /dev/md2:
CELLNODE03: 0 8 9 0 active sync /dev/sda9
CELLNODE03: 1 8 25 1 active sync /dev/sdb9
CELLNODE03: /dev/md5:>>>>>>>>>>>>>>>>>>>
CELLNODE03: 0 8 5 0 active sync /dev/sda5
CELLNODE03: 1 8 21 1 active sync /dev/sdb5
CELLNODE03: /dev/md6:
CELLNODE03: 0 8 6 0 active sync /dev/sda6
CELLNODE03: 1 8 22 1 active sync /dev/sdb6
CELLNODE03: /dev/md7:>>>>>>>>>>>>>>>>>>>>>>>
CELLNODE03: 0 8 7 0 active sync /dev/sda7
CELLNODE03: 1 8 23 1 active sync /dev/sdb7
CELLNODE03: /dev/md8:
CELLNODE03: 0 8 8 0 active sync /dev/sda8
CELLNODE03: 1 8 24 1 active sync /dev/sdb8
CELLNODE03: /dev/md11:
CELLNODE03: 0 8 11 0 active sync /dev/sda11
CELLNODE03: 1 8 27 1 active sync /dev/sdb11
[root@cellnode01]#
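
The same point can be checked more compactly. A minimal sketch, built on the same dcli loop used above (md4 is added to the list here; adjust the array numbers to your own layout), prints only the arrays that report a removed member:

# List arrays that report a removed mirror member on each cell
dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root \
'for x in 1 2 4 5 6 7 8 11; do mdadm --detail /dev/md$x | grep -q removed && echo "md$x is degraded"; done'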

 

dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root 'cat /proc/mdstat'
CELLNODE01: Personalities : [raid1]
CELLNODE01: md6 : active raid1 sda6[0]
CELLNODE01: 10485696 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md7 : active raid1 sda7[0]
CELLNODE01: 3145664 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md8 : active raid1 sda8[0]
CELLNODE01: 3145664 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md5 : active raid1 sda5[0]
CELLNODE01: 10485696 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md2 : active (auto-read-only) raid1 sda9[0]>>>>read only.
CELLNODE01: 2097088 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md1 : active (auto-read-only) raid1 sda10[0]>>>>>>>>>>>>>>>read-only
CELLNODE01: 714752 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md11 : active raid1 sda11[0]
CELLNODE01: 5242752 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: md4 : active raid1 sda1[0]
CELLNODE01: 120384 blocks [2/1] [U_]
CELLNODE01:
CELLNODE01: unused devices: <none>
CELLNODE02: Personalities : [raid1]
CELLNODE02: md8 : active raid1 sdb8[1] sda8[0]
CELLNODE02: 3145664 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md7 : active raid1 sdb7[1] sda7[0]
CELLNODE02: 3145664 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md6 : active raid1 sdb6[1] sda6[0]
CELLNODE02: 10485696 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md2 : active raid1 sdb9[1] sda9[0]
CELLNODE02: 2097088 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md1 : active raid1 sdb10[1] sda10[0]>>>>>>>>>>>>>
CELLNODE02: 714752 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md11 : active raid1 sdb11[1] sda11[0]
CELLNODE02: 5242752 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md5 : active raid1 sdb5[1] sda5[0]>>>>>>>>>>>>>>>>>>>>
CELLNODE02: 10485696 blocks [2/2] [UU]
CELLNODE02:
CELLNODE02: md4 : active raid1 sdb1[1] sda1[0]
CELLNODE02: 120384 blocks [2/2] [UU]
CELLNODE02:

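In the /proc/mdstat output above, [2/2] [UU] means both mirror members are active, while [2/1] [U_] on CELLNODE01 means the array is running on a single member. Before anything is re-added, it is worth confirming that the second system disk and its partitions are visible to the operating system at all; the commands below are standard Linux tools used as a generic sketch, not steps taken from this note:

# On the affected cell, check that the second system disk (sdb) and its partitions are present
lsblk /dev/sdb
grep sdb /proc/partitions

# Show only the arrays running on a single mirror member
grep -B1 '\[2/1\]' /proc/mdstat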
 

//From mdadm-Q_detail.out in sundiag_cellnode01_1704NM706T_2021_04_24_17_16.tar.bz2

/dev/md1:
Version : 0.90
Creation Time : Thu Apr 27 08:58:07 2017
Raid Level : raid1
Array Size : 714752 (698.00 MiB 731.91 MB)
Used Dev Size : 714752 (698.00 MiB 731.91 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Sat Nov 17 13:07:34 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : 952b04cd:5bb4df76:04894333:532a878b
Events : 0.18

Number Major Minor RaidDevice State
0 8 10 0 active sync /dev/sda10
- 0 0 1 removed>>>>>>>>>>>>>>>>>>>>>>>>>>removed
/dev/md2:
Version : 0.90
Creation Time : Thu Apr 27 08:58:07 2017
Raid Level : raid1
Array Size : 2097088 (2047.94 MiB 2147.42 MB)
Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Sat Nov 17 13:07:34 2018
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : 1b062297:cf0ac238:04894333:532a878b
Events : 0.20

Number Major Minor RaidDevice State
0 8 9 0 active sync /dev/sda9
- 0 0 1 removed
/dev/md4:
Version : 0.90
Creation Time : Thu Apr 27 08:58:08 2017
Raid Level : raid1
Array Size : 120384 (117.56 MiB 123.27 MB)
Used Dev Size : 120384 (117.56 MiB 123.27 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 4
Persistence : Superblock is persistent

Update Time : Sat Apr 24 12:48:35 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : bdccd04b:d323018f:04894333:532a878b
Events : 0.70

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
- 0 0 1 removed
/dev/md5:
Version : 0.90
Creation Time : Thu Apr 27 08:58:08 2017
Raid Level : raid1
Array Size : 10485696 (10.00 GiB 10.74 GB)
Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 5
Persistence : Superblock is persistent

Update Time : Sat Apr 24 17:22:10 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : 6627a256:d88e9f98:04894333:532a878b
Events : 0.13676

Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
- 0 0 1 removed>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
/dev/md6:
Version : 0.90
Creation Time : Thu Apr 27 08:58:09 2017
Raid Level : raid1
Array Size : 10485696 (10.00 GiB 10.74 GB)
Used Dev Size : 10485696 (10.00 GiB 10.74 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 6
Persistence : Superblock is persistent

Update Time : Sat Apr 24 14:06:50 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : c55252ee:20b86356:04894333:532a878b
Events : 0.83

Number Major Minor RaidDevice State
0 8 6 0 active sync /dev/sda6
- 0 0 1 removed
/dev/md7:
Version : 0.90
Creation Time : Thu Apr 27 08:58:09 2017
Raid Level : raid1
Array Size : 3145664 (3.00 GiB 3.22 GB)
Used Dev Size : 3145664 (3.00 GiB 3.22 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 7
Persistence : Superblock is persistent

Update Time : Sat Apr 24 17:22:07 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : a2759ea5:ce57a376:04894333:532a878b
Events : 0.5343

Number Major Minor RaidDevice State
0 8 7 0 active sync /dev/sda7
- 0 0 1 removed>>>>>>>>>>>>>>>>>>>>>>>>>
/dev/md8:
Version : 0.90
Creation Time : Thu Apr 27 08:58:09 2017
Raid Level : raid1
Array Size : 3145664 (3.00 GiB 3.22 GB)
Used Dev Size : 3145664 (3.00 GiB 3.22 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 8
Persistence : Superblock is persistent

Update Time : Sat Apr 24 14:06:51 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : 3400823c:7cf58385:04894333:532a878b
Events : 0.31

Number Major Minor RaidDevice State
0 8 8 0 active sync /dev/sda8
- 0 0 1 removed
/dev/md11:
Version : 0.90
Creation Time : Thu Apr 27 08:58:09 2017
Raid Level : raid1
Array Size : 5242752 (5.00 GiB 5.37 GB)
Used Dev Size : 5242752 (5.00 GiB 5.37 GB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 11
Persistence : Superblock is persistent

Update Time : Sat Apr 24 17:22:07 2021
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Consistency Policy : resync

UUID : d17ce6c2:126cccac:04894333:532a878b
Events : 0.4660

Number Major Minor RaidDevice State
0 8 11 0 active sync /dev/sda11
- 0 0 1 removed>>>>>>>>>>>>>>>>>
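
In every array above, RaidDevice 1 (the /dev/sdb member of the mirror) is reported as removed. A generic mdadm check of the missing member's on-disk superblock is sketched below; /dev/sdb10 is used only as an example (it is the expected md1 member on the healthy cells), and the commented re-add line is illustrative mdadm usage only, not the documented resolution of this issue:

# Inspect the md superblock on the partition that should be the second member of md1
mdadm --examine /dev/sdb10

# Generic way to return a healthy partition to a degraded mirror (illustrative only;
# do not run without confirming the correct action for this issue)
# mdadm /dev/md1 --add /dev/sdb10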

 

Changes

Performed cell reboot after patching.

Cause

