Last updated on FEBRUARY 12, 2013
Applies to: SPARC SuperCluster T4-4 Full Rack - Version All Versions to All Versions [Release All Releases]
SPARC SuperCluster T4-4 Half Rack - Version All Versions to All Versions [Release All Releases]
SPARC SuperCluster T4-4 - Version All Versions to All Versions [Release All Releases]
Oracle Solaris on SPARC (64-bit)
In SPARC SuperCluster configurations with more than two LDoms/domains, such as ConfigE/ConfigF of the SSC 1.0.1 software version, the first general purpose (GP) domain uses virtual devices as its boot and boot-mirror disks. Rebooting any of the I/O domains causes the zpool of the first GP domain to enter a degraded state, and manual intervention is required to correct it. Rebooting the first GP domain while its zpool is degraded can lead to zpool data corruption, because the mirrors are out of sync, and can eventually leave the domain unbootable or unstable.
The first GP domain has the following virtual disks.
The disk remains in this state until (a) the serving domain is back up and (b) you manually repair the zpool using 'zpool clear'. If you do not manually repair the zpool, the disk stays in this state and receives no further writes; it remains a stale mirror indefinitely.
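The repair step above can be sketched as follows. This is a minimal sketch, not taken from the article: the pool name "rpool" is an assumption, and the degraded-state check is factored into a small helper that reads 'zpool status' text on stdin so the decision logic can be shown on its own. The actual 'zpool' invocations are kept in comments because they require a live Solaris domain.

```shell
# Helper (illustrative): decide from `zpool status` output whether the
# pool is degraded and therefore needs a manual `zpool clear`.
pool_degraded() {
  # Reads `zpool status` text on stdin; succeeds if the pool is DEGRADED.
  grep -q 'state: DEGRADED'
}

# Intended usage on the GP domain, once the serving domain is back up
# (pool name "rpool" is an assumption, not from the article):
#
#   if zpool status rpool | pool_degraded; then
#     zpool clear rpool      # clear the error so the stale side re-syncs
#     zpool status -v rpool  # watch the resilver complete before rebooting
#   fi
```

Waiting for the resilver to finish before any further reboot is the point of step 3: rebooting while one side is still stale is exactly the scenario that leads to the data loss described below.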
If all three domains are rebooted simultaneously, the primary domain will serve its vdisk before the last domain is able to serve its vdisk. When the middle domain boots from c0d0 (the stale mirror), the up-to-date mirror on c0d1 is not yet visible, so ZFS cannot know that a more up-to-date mirror exists. It mounts the stale mirror and continues as if everything were fine, having effectively jumped back in time. When the last domain comes up and its mirror becomes visible, that mirror simply looks "different", so ZFS resilvers it, and all of the later data is lost.
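A simple guard against the failure mode above is to refuse to reboot the GP domain unless both mirror sides report ONLINE. The sketch below is illustrative and assumes the article's device names c0d0/c0d1; it checks 'zpool status' text read from stdin so the check itself can be exercised without a live pool.

```shell
# Guard (illustrative): succeed only if every c0d0/c0d1 vdev line in
# `zpool status` output reports ONLINE; fail if any side is stale or
# unavailable. Device names c0d0/c0d1 are taken from the article.
all_mirror_sides_online() {
  # Select the vdev lines, then fail if any of them lacks ONLINE.
  ! grep -E 'c0d[01]' | grep -qv 'ONLINE'
}

# Intended usage before rebooting the GP domain (pool name "rpool"
# is an assumption, not from the article):
#
#   if zpool status rpool | all_mirror_sides_online; then
#     reboot
#   else
#     echo "mirror out of sync - run zpool clear and wait for resilver" >&2
#   fi
```

Running such a check before any planned reboot ensures the up-to-date side is always present, so ZFS never silently mounts and then resilvers over the newer data.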