
ORA-15038 On Diskgroup Mount After Node Eviction (Doc ID 555918.1)

Last updated on JANUARY 30, 2022

Applies to:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 11.1.0.7 [Release 10.2 to 11.1]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Database Exadata Express Cloud Service - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.

Symptoms

Two of the four nodes were evicted from the cluster. After the two nodes rejoined the cluster, some diskgroups were mounted, but one particular diskgroup failed to mount. On the unaffected nodes, all diskgroups remained mounted.

The errors reported when trying to mount the diskgroup:

 

SQL> alter diskgroup datadg1 mount;
alter diskgroup datadg1 mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "DATADG1"
ORA-15038: disk '' size mismatch with diskgroup [1048576] [4096] [512]
ORA-15038: disk '' size mismatch with diskgroup [1048576] [4096] [512]
ORA-15038: disk '' size mismatch with diskgroup [1048576] [4096] [512]


About error ORA-15038:

 

ORA-15038: disk '%s' size mismatch with diskgroup [%s] [%s] [%s]
Cause:  An attempt was made to mount into a diskgroup a disk whose recorded
        allocation unit size, metadata block size, or physical sector size
        was inconsistent with the other diskgroup members.
Action: Check if the system configuration has changed.

 Call Stack for error ORA-15038

kfgrpJoin <- kfgDiscoverGroup  <- kfgFinalizeMount  <- kfgscFinalize <- kfgForEachKfgsc <- kfgsoFinalize <- kfgFinalize <- kfxdrvMount
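
The three bracketed values in the ORA-15038 message report the sizes recorded in the disk header; in this case they are consistent with a 1 MB allocation unit (1048576), a 4 KB metadata block (4096) and a 512-byte physical sector. As a sketch, the corresponding values recorded for the diskgroup itself can be cross-checked on an instance where DATADG1 is still mounted (a SECTOR_SIZE column exists only in later releases):

-- Sketch only: sizes recorded for the diskgroup, queried on an instance
-- where DATADG1 is still mounted, for comparison with the ORA-15038 values
select name, allocation_unit_size, block_size, state
  from v$asm_diskgroup
 where name = 'DATADG1';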

 

Table 1. Information about the disks used by diskgroup DATADG1 (from gv$asm_disk):

PATH            GROUP#  NAME          MOUNT_STATE  HEADER_STATUS  MODE_STATE  STATE   INST_ID
/dev/raw/raw3   2       DATADG1_0000  CACHED       MEMBER         ONLINE      NORMAL  1
/dev/raw/raw3   2       DATADG1_0000  CACHED       MEMBER         ONLINE      NORMAL  4
/dev/raw/raw4   2       DATADG1_0001  CACHED       MEMBER         ONLINE      NORMAL  1
/dev/raw/raw4   2       DATADG1_0001  CACHED       MEMBER         ONLINE      NORMAL  4
/dev/raw/raw5   2       DATADG1_0002  CACHED       MEMBER         ONLINE      NORMAL  1
/dev/raw/raw5   2       DATADG1_0002  CACHED       MEMBER         ONLINE      NORMAL  4
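
As a sketch, the disk information shown in Table 1 can be obtained from gv$asm_disk with a query along the following lines (the exact statement is assumed; the column aliases match the headings above):

-- Sketch: per-instance view of every disk ASM has discovered
select inst_id,
       path,
       group_number  as "GROUP#",
       name,
       mount_status  as "MOUNT_STATE",
       header_status,
       mode_status   as "MODE_STATE",
       state
  from gv$asm_disk
 order by path, inst_id;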

 

Important details about the disk configuration were validated for devices /dev/raw/raw[345]:

  1. The path and bindings for the disks were the same on the working and failing nodes.
  2. The oracle user had ownership and read/write permissions on both the working and failing nodes.
  3. The disk header contents were identical on the working and failing nodes (validated with kfed).
  4. The fdisk command returned the same configuration on the working and failing nodes.
  5. The diskgroup DATADG1 remained mounted on the working nodes using raw devices raw3, raw4 and raw5.
  6. The complete list of disks used by the mounted diskgroup on the working nodes ranged from raw3 up to raw16 (see the query sketch after this list).
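
As a sketch, the member disks backing the mounted diskgroup can be confirmed on the working instances with a join between gv$asm_disk and gv$asm_diskgroup (resolving the group by name rather than by a hard-coded group number):

-- Sketch: list the disks each working instance sees as members of
-- the mounted diskgroup DATADG1
select d.inst_id, d.path, d.name
  from gv$asm_disk d,
       gv$asm_diskgroup g
 where d.group_number = g.group_number
   and d.inst_id      = g.inst_id
   and g.name         = 'DATADG1'
 order by d.inst_id, d.path;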

Changes

The problem was reported after two of the four nodes were evicted from the cluster.

 There were additional rows returned from gv$asm_disk:

 

PATH            GROUP#  NAME  MOUNT_STATE  HEADER_STATUS  MODE_STATE  STATE   INST_ID
/dev/raw/raw17  0             IGNORED      MEMBER         ONLINE      NORMAL  1
/dev/raw/raw17  0             IGNORED      MEMBER         ONLINE      NORMAL  2
/dev/raw/raw17  0             IGNORED      MEMBER         ONLINE      NORMAL  3
/dev/raw/raw17  0             IGNORED      MEMBER         ONLINE      NORMAL  4

 

The devices with MOUNT_STATE value IGNORED were raw17 through raw30.
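
As a sketch, these rows can be isolated by filtering on the MOUNT_STATUS column (shown as MOUNT_STATE in the listings above):

-- Sketch: devices ASM discovered but ignored, per instance
select inst_id, path, mount_status, header_status
  from gv$asm_disk
 where mount_status = 'IGNORED'
 order by path, inst_id;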

Cause


