
Sparse Disk Group resize/shrinking issues: OS_MB does not match Total_MB after resize (Doc ID 2831147.1)

Last updated on FEBRUARY 15, 2022

Applies to:

Oracle Exadata Storage Server Software - Version 11.1.3.1.0 to 21.2.4.0.0 [Release 11.1 to 21]
Oracle Database - Enterprise Edition - Version 12.1.0.2 to 21.4 [Release 12.1 to 21.0]
Information in this document applies to any platform.

Symptoms

Exadata X8M-2 (On-Prem) with three X8-2L HC Storage cells (celadm01-03).
Image: 21.2.4.0.0.210909 ExaRU
Grid Infrastructure: 19.12.0.0.210720 RU.


S01_SPSE is a sparse disk group with 36 grid disks (12 per cell), each 284G in size. For example:

celadm01: S01_SPSE_CD_00_celadm01 284G
celadm02: S01_SPSE_CD_00_celadm02 284G
celadm03: S01_SPSE_CD_00_celadm03 284G

virtualSize of each disk is 2.7734375T
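As a cross-check of the figures above, the virtualSize maps back to the physical grid disk size as follows (the 10x virtual-to-physical ratio is inferred from these numbers, not a stated configuration):

```shell
# Convert the reported virtualSize (2.7734375 TB) to GB and compare it with
# the physical grid disk size (284 GB). The 10x ratio is derived from this
# article's figures, not from a documented setting.
virtual_gb=$(awk 'BEGIN { printf "%d", 2.7734375 * 1024 }')   # TB -> GB
physical_gb=284
ratio=$(( virtual_gb / physical_gb ))
echo "virtualSize = ${virtual_gb} GB, physical = ${physical_gb} GB, ratio = ${ratio}x"
# -> virtualSize = 2840 GB, physical = 284 GB, ratio = 10x
```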

The customer needs to reduce the size of this disk group (DG) by half. To achieve this, the following command was issued at the ASM level:

SQL> alter diskgroup S01_SPSE resize all size 145408M rebalance power 1024;

Diskgroup altered.

Once the rebalance completed, the grid disks on all three cells were also resized to 142G (145408M). For example, on celadm01:

[root@celadm01 ~]# cellcli -e \
alter griddisk \
S01_SPSE_CD_00_celadm01 \
,S01_SPSE_CD_01_celadm01 \
,S01_SPSE_CD_02_celadm01 \
,S01_SPSE_CD_03_celadm01 \
,S01_SPSE_CD_04_celadm01 \
,S01_SPSE_CD_05_celadm01 \
,S01_SPSE_CD_06_celadm01 \
,S01_SPSE_CD_07_celadm01 \
,S01_SPSE_CD_08_celadm01 \
,S01_SPSE_CD_09_celadm01 \
,S01_SPSE_CD_10_celadm01 \
,S01_SPSE_CD_11_celadm01 \
size=145408M;
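The twelve-name list above can be generated with a small loop rather than typed by hand; a sketch, using the cell name and disk prefix from this example (the cellcli invocation itself is left commented out):

```shell
# Build the comma-separated grid disk list for one cell. seq -w pads the
# index to two digits (00..11), matching the grid disk naming above.
cell=celadm01
disks=$(for i in $(seq -w 0 11); do printf 'S01_SPSE_CD_%s_%s,' "$i" "$cell"; done)
disks=${disks%,}   # strip the trailing comma
echo "$disks"
# cellcli -e "alter griddisk ${disks} size=145408M"
```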

Post resize (only one disk from each fail-group is shown below for simplicity):

celadm01: S01_SPSE_CD_00_celadm01 142G
celadm02: S01_SPSE_CD_00_celadm02 142G
celadm03: S01_SPSE_CD_00_celadm03 142G

ASMCMD reports the size correctly (142G x 36 = 5112G = 5234688M):

$ asmcmd lsdg
State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 512 4096 4194304 5234688 5232060 145408 2543326 0 N S01_SPSE/
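The Total_MB figure can be sanity-checked against the per-disk size:

```shell
# 36 grid disks at 145408 MB each should equal the Total_MB value
# reported by asmcmd lsdg.
per_disk_mb=145408
disk_count=36
total_mb=$(( per_disk_mb * disk_count ))
echo "expected Total_MB = ${total_mb}"   # matches the 5234688 shown by lsdg
```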

However, OS_MB for each disk is still reported as 2908160M; TOTAL_MB looks fine (145408M = 142G).

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU MODE_ST STATE OS_MB TOTAL_MB FREE_MB NAME FAILGROUP PATH
3 0 CACHED MEMBER ONLINE NORMAL 2908160 145408 145332 S01_SPSE_CD_00_CELADM01 CELADM01 o/192.168.13.13;192.168.13.14/S01_SPSE_CD_00_celadm01
3 1 CACHED MEMBER ONLINE NORMAL 2908160 145408 145332 S01_SPSE_CD_01_CELADM01 CELADM01 o/192.168.13.13;192.168.13.14/S01_SPSE_CD_01_celadm01
3 2 CACHED MEMBER ONLINE NORMAL 2908160 145408 145328 S01_SPSE_CD_02_CELADM01 CELADM01 o/192.168.13.13;192.168.13.14/S01_SPSE_CD_02_celadm01
...
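A quick way to spot the mismatched disks is to compare the two columns in the V$ASM_DISK output; a sketch using the rows above as sample input:

```shell
# Flag rows where OS_MB (column 7) differs from TOTAL_MB (column 8).
# The here-document is the V$ASM_DISK output shown in this article.
awk '$7 != $8 { print $10, "OS_MB=" $7, "TOTAL_MB=" $8 }' <<'EOF'
3 0 CACHED MEMBER ONLINE NORMAL 2908160 145408 145332 S01_SPSE_CD_00_CELADM01 CELADM01 o/192.168.13.13;192.168.13.14/S01_SPSE_CD_00_celadm01
3 1 CACHED MEMBER ONLINE NORMAL 2908160 145408 145332 S01_SPSE_CD_01_CELADM01 CELADM01 o/192.168.13.13;192.168.13.14/S01_SPSE_CD_01_celadm01
EOF
```

Every S01_SPSE disk is printed, confirming the OS_MB/TOTAL_MB mismatch is disk-group-wide rather than limited to one cell.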

kfod also returns the OS disk size as 2908160M:

$ kfod disks=all | grep -i S01_SPSE
25: 2908160 MB o/192.168.13.13;192.168.13.14/S01_SPSE_CD_00_celadm01
61: 2908160 MB o/192.168.13.15;192.168.13.16/S01_SPSE_CD_00_celadm02
97: 2908160 MB o/192.168.13.17;192.168.13.18/S01_SPSE_CD_00_celadm03
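The stale value itself is telling: 2908160 MB converts exactly to the pre-resize virtualSize of 2.7734375T, suggesting that OS_MB is still reporting the unchanged virtual size of the sparse grid disks (an inference from the numbers shown here, not a confirmed cause):

```shell
# 2908160 MB expressed in TB equals the virtualSize reported before the
# resize (2.7734375 TB), i.e. OS_MB appears frozen at the old virtual size.
awk 'BEGIN { printf "%.7f TB\n", 2908160 / 1024 / 1024 }'
# -> 2.7734375 TB
```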

The following options were tried, but none helped:
a) dismount/mount of DG
b) cluster restart
c) offline/online of disks

Cause

