Issue with ZFS SA NFS Plugin for Solaris Cluster 3.3_u2 with ZFS SA version 2013.1.7.x (AK 8.7.x)
Last updated on FEBRUARY 20, 2018
Applies to:
Oracle SuperCluster M7 Hardware - Version All Versions to All Versions [Release All Releases]
Oracle SuperCluster T5-8 Hardware - Version All Versions to All Versions [Release All Releases]
SPARC SuperCluster T4-4 - Version All Versions to All Versions [Release All Releases]
Oracle SuperCluster Specific Software - Version 1.x to 2.x [Release 1.0 to 2.0]
Oracle SuperCluster T5-8 Half Rack - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.
SuperCluster systems installed with the October QFSDP (or later) or SuperCluster Install Bundle 2.4.16 (or later 2.4.x builds) will have ZFS Storage Appliance software version AK 2013.1.8.7 (or later) installed.
AK 2013.1.8.7 software requires v1.0.5 of the ZFS SA NFS Plugin to be installed for use with Solaris Cluster 3.3_u2. Version 1.0.5 of the ZFS SA NFS Plugin in turn has a hard dependency on Java 7 Update 151 (or a later Java 7 update), which must also be the default system-wide version of Java.
While multiple versions of Java can co-exist on a single system, only one version can be the default system-wide version (i.e. the Java instance symlinked from /usr/java). With this in mind, please be aware of the need to make Java 7 Update 151 (or a later Java 7 update) the default system-wide version of Java. Put another way: if an application running under Solaris Cluster 3.3_u2 requires the default system-wide Java to be a Java 7 version earlier than Update 151, or any version of Java 5, Java 6 or Java 8, then until there is a fix for this issue the SuperCluster October QFSDP or SuperCluster Installation Build 2.4.15 (or later) will not be compatible with such a configuration.
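The symlink switch described above can be rehearsed safely in a scratch directory. A minimal sketch, assuming the Java 7u151 instance is installed under a path such as /usr/jdk/instances/jdk1.7.0_151 (on a live node, JAVA_LINK would be /usr/java and the paths below are hypothetical stand-ins):

```shell
# Rehearse the default-Java symlink switch in a temporary directory.
# On a real system: JAVA_LINK=/usr/java and JDK_HOME=<the Java 7u151 install path>.
DEMO=$(mktemp -d)
JDK_HOME="$DEMO/jdk1.7.0_151"
JAVA_LINK="$DEMO/java"
mkdir -p "$JDK_HOME"
ln -s "$DEMO/jdk1.7.0_131" "$JAVA_LINK"   # pretend an older update is currently the default

# The actual switch: remove the old link and point it at the 7u151 instance.
rm -f "$JAVA_LINK"
ln -s "$JDK_HOME" "$JAVA_LINK"
ls -l "$JAVA_LINK"
```

The same two commands (rm -f, then ln -s) performed against /usr/java as root make the new instance the default system-wide Java.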
This note details how to check for and install both v1.0.5 of the ZFS SA NFS Plugin and Java 7 Update 151 on systems running Solaris Cluster 3.3_u2, as well as how to make Java 7 Update 151 the default system-wide version of Java.
It is assumed that Solaris Cluster 3.3_u2 is running with core patch 145333-37 or later.
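Before installing the plugin, it is worth confirming that the default system-wide Java already meets the Update 151 requirement. A hedged sketch of such a check (the sample version string below is illustrative; on a live node it would come from `/usr/java/bin/java -version 2>&1 | head -1`):

```shell
# Check that the default system-wide Java is 7u151 or later.
# Sample line for illustration; on a real system capture it from /usr/java/bin/java.
ver='java version "1.7.0_151"'
upd=$(printf '%s\n' "$ver" | sed -n 's/.*"1\.7\.0_\([0-9][0-9]*\)".*/\1/p')
if [ "${upd:-0}" -ge 151 ]; then
    echo "default Java is 7u${upd}: OK for ZFS SA NFS Plugin v1.0.5"
else
    echo "default Java is too old for ZFS SA NFS Plugin v1.0.5"
fi
```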
With an incompatible version of the ZFS SA NFS Plugin (e.g. 1.0.0) installed, failures will be seen during NFS fencing/unfencing and when cluster storage resources are brought online. Because the storage resource fails to start successfully, output similar to the following will be seen:
# /usr/cluster/bin/cluster status

=== Cluster Resource Groups ===

Group Name      Node Name             Suspended   State
----------      ---------             ---------   -----
lh-rg           ssc1-appio0101-bzc1   No          Pending_online_blocked
                ssc1-appio0201-bzc1   No          Offline
scalmnt-rg      ssc1-appio0101-bzc1   No          Online_faulted
                ssc1-appio0201-bzc1   No          Online_faulted

=== Cluster Resources ===

Resource Name       Node Name             State          Status Message
-------------       ---------             -----          --------------
apache-rs           ssc1-appio0101-bzc1   Offline        Offline
                    ssc1-appio0201-bzc1   Offline        Offline
lh-rs               ssc1-appio0101-bzc1   Online         Online - LogicalHostname online.
                    ssc1-appio0201-bzc1   Offline        Offline
apache-scalmnt-rs   ssc1-appio0101-bzc1   Start_failed   Faulted
                    ssc1-appio0201-bzc1   Start_failed   Faulted
When this occurs in a branded zone cluster, messages similar to the following can be seen in /var/adm/messages of the non-global zone:
and the exact symptom of the failure to do NFS fencing/unfencing will be reflected in messages similar to the following in the global zone's /var/adm/messages file:
Nov 3 09:49:48 ssc1-appioadm0101 ssc1-appio0201-bzc1 is obtaining access to shared ZFS Storage Appliances.
Nov 3 09:49:48 ssc1-appioadm0101 cl_runtime: [ID 702911 kern.notice] NOTICE: Nov 3, 2017 9:49:48 AM com.sun.s7000.client.OSCNFSCfg executeWorkflow
Nov 3 09:49:48 ssc1-appioadm0101 SEVERE: Cannot make calls to the target system https://192.168.28.1:215/ak. Check the target system port number, DNS location, or IP Address.:192.168.28.1
In addition, the 'clnas' Solaris Cluster command will not be able to communicate with the ZFS SA, and output similar to the following will be seen when running 'clnas':
clnas: Cannot access device "192.168.28.1" with the user ID and password
saved in the cluster.
=== NAS Devices ===
Nas Device: 192.168.28.1
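When the stored credentials themselves are suspect, the appliance's login can be inspected and re-registered with clnas. A hedged sketch only (the `osc_agent` user ID is an assumption; substitute the appliance user actually configured on your cluster, and note that `clnas set` prompts for the password):

```shell
# Sketch: inspect and refresh the NAS device credentials held by the cluster.
# Assumption: the appliance user is osc_agent; use your site's configured user.
# Show the NAS devices currently registered in the cluster:
clnas show -v
# Re-register the user ID/password for the appliance (prompts for the password):
clnas set -p userid=osc_agent 192.168.28.1
```

Note that clnas will continue to fail against AK 2013.1.8.7 until v1.0.5 of the ZFS SA NFS Plugin and the required default Java are in place, regardless of the stored credentials.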