Solaris Volume Manager (SVM) and Solaris Cluster: How To Handle Storage-Based Data Replication for Disaster Recovery (DR)
(Doc ID 1012411.1)
Last updated on DECEMBER 07, 2021
Applies to:
Solaris Cluster - Version 3.0 to 3.3 U2 [Release 3.0 to 3.3]
Sun Solaris Volume Manager (SVM) - Version 11.9.0 to 11.10.0 [Release 11.0]
Solaris Cluster Geographic Edition - Version 3.0x to 3.3 U2 [Release 3.0 to 3.3]
Oracle Solaris on SPARC (32-bit)
Oracle Solaris on SPARC (64-bit)
Oracle Solaris on x86-64 (64-bit)
Oracle Solaris on x86 (32-bit)
Goal
Data replication enables you to mirror data from one system to another. The replication can be host-based or storage-based. This document describes a solution for storage-based data replication with Solaris Volume Manager (SVM) in Sun Cluster environments.
Assumptions:
There is a Sun Cluster on the production side and either a single Solaris system or another Sun Cluster on the disaster recovery side (referred to as the DR site in this document).
The replication mechanism of the shared storage is purely storage-based; there is no host-based mirroring of the shared storage.
A mapping of the source devices to the target devices has to be available.
For example:
Production site            |         | DR site
Sun Cluster did device d1  | <-----> | Single-system physical device c2t3d4
                           |   or    |
Sun Cluster did device d1  | <-----> | Sun Cluster did device d2
Keep in mind:
Sun Cluster Geographic Edition is available, which provides data replication between geographically separated clusters.
For details and supported possibilities please refer to the documentation:
* Sun Cluster Geographic Edition 3.1 2006Q4 Software Collection
* Sun Cluster Geographic Edition 3.2 11/09 Software Collection
* Oracle Solaris Cluster 3.3 3/13 Geographic Edition Information
How to handle storage-based data replication for disaster recovery with Solaris Volume Manager and Sun Cluster?
Requirements:
- A storage-based replication mechanism such as SRDF (EMC) or TrueCopy (Hitachi) is required. SRDF is used in this example.
- The quorum disk needs to be configured outside of the replication group (see the check example after this list).
- The mount-at-boot option of the replicated file systems has to be set to "no" in the file /etc/vfstab. Use HAStoragePlus to control the startup of both PxFS (global file system) and FFS (failover file system) mounts; a vfstab and resource sketch follows this list.
- Consider the Auto_start_on_new_cluster property of the resource groups. It may be necessary to set it to false to control the startup of the resource groups, because they should not start automatically when the storage is not available (see the command example after this list).
- To simplify this mapping it is best to keep the did/device naming identical on the two sites. However, in some configurations (for example, N production systems covered by one target system) this may not be achievable. This mapping is essential and needs to be precise.
- The replicated device group is configured in one or more source metasets. For each metaset the configuration file md.tab is kept up to date by manually storing the output of "metastat -s <setname> -p" in the md.tab file (see the example after this list).
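
A quick way to list the configured quorum devices, so they can be checked against the members of the replication group, is sketched below. The exact command depends on the cluster release; both commands only display the configuration, the comparison itself is manual:

   # List the configured quorum devices (Solaris Cluster 3.2 and later)
   clquorum show
   # Quorum status on Sun Cluster 3.0/3.1
   scstat -q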
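
A minimal sketch of a replicated file system with the mount-at-boot field set to "no" and an HAStoragePlus resource that mounts it under cluster control is shown below. The metaset name "appset", metadevice d100, mount point /global/app, and the resource group and resource names are placeholders only, not part of the original configuration:

   # /etc/vfstab entry for a replicated global file system, mount at boot = no
   /dev/md/appset/dsk/d100  /dev/md/appset/rdsk/d100  /global/app  ufs  2  no  global,logging

   # HAStoragePlus resource controlling the mount (Solaris Cluster 3.2+ syntax;
   # scrgadm provides the equivalent on Sun Cluster 3.0/3.1)
   clresource create -g app-rg -t SUNW.HAStoragePlus \
       -p FilesystemMountPoints=/global/app app-hasp-rs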
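
Setting Auto_start_on_new_cluster to false could look like the following; the resource group name app-rg is a placeholder, and the syntax depends on the cluster release:

   # Prevent the resource group from starting automatically when the cluster forms
   clresourcegroup set -p Auto_start_on_new_cluster=false app-rg   # Solaris Cluster 3.2+
   scrgadm -c -g app-rg -y Auto_start_on_new_cluster=false         # Sun Cluster 3.0/3.1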
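
Keeping md.tab current for a metaset could be done as sketched below; the metaset name "appset" is a placeholder, and md.tab normally resides in /etc/lvm:

   # Append the current configuration of metaset "appset" to md.tab
   metastat -s appset -p >> /etc/lvm/md.tab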
Solution