
Issue with ZFS SA NFS Plugin for Solaris Cluster 3.3_u2 with ZFS SA version 2013.1.7.x (AK 8.7.x) (Doc ID 2326715.1)

Last updated on JUNE 11, 2020

Applies to:

Oracle SuperCluster M7 Hardware - Version All Versions to All Versions [Release All Releases]
Oracle SuperCluster T5-8 Hardware - Version All Versions to All Versions [Release All Releases]
SPARC SuperCluster T4-4 - Version All Versions to All Versions [Release All Releases]
Oracle SuperCluster Specific Software - Version 1.x to 2.x [Release 1.0 to 2.0]
Oracle SuperCluster T5-8 Half Rack - Version All Versions to All Versions [Release All Releases]
Information in this document applies to any platform.

Symptoms

SuperCluster systems installed with the October QFSDP (or later) or SuperCluster Install Bundle 2.4.16 (or later 2.4.x builds) will have ZFS Storage Appliance software version AK 2013.1.8.7 (or later) installed.
AK 2013.1.8.7 requires v1.0.5 of the ZFS SA NFS Plugin for use with Solaris Cluster 3.3_u2. Version 1.0.5 of the plugin in turn has a hard dependency on Java 7 Update 151 (or a later Java 7 update), and additionally requires that Java 7 Update 151 (or a later Java 7 update) be the default system-wide version of Java.

While it is possible for multiple versions of Java to co-exist on a single system, only one version can be the system-wide default (i.e. the Java instance symlinked at /usr/java). Java 7 Update 151 (or a later Java 7 update) must therefore be made the default system-wide version of Java. Put another way: if an application running under Solaris Cluster 3.3_u2 requires the default system-wide Java to be a Java 7 release earlier than Update 151, or any release of Java 5, Java 6 or Java 8, then until there is a fix for this issue the SuperCluster October QFSDP and SuperCluster Install Bundle 2.4.15 (or later) will not be compatible with such configurations.
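
To verify which Java release is currently the system-wide default, one can inspect the /usr/java symlink and the version it reports. The commands below are a minimal sketch; the jdk1.7.0_151 installation path is illustrative and may differ on a given system:

# ls -l /usr/java
# /usr/java/bin/java -version

If /usr/java does not resolve to a Java 7 Update 151 (or later Java 7) installation, the symlink can be repointed to one, for example:

# rm /usr/java
# ln -s /usr/jdk/jdk1.7.0_151 /usr/java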

This note details how to check for and install both v1.0.5 of the ZFS SA NFS Plugin and Java 7 Update 151 on systems running Solaris Cluster 3.3_u2, and how to make Java 7 Update 151 the default system-wide version of Java.
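
As a first check, the currently installed plugin version can be queried from its SVR4 package entry with pkginfo. The package name ORCLscuss used below is an assumption for illustration; confirm the actual package name from the plugin's install media or README:

# pkginfo -l ORCLscuss | grep VERSION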

It is assumed that Solaris Cluster 3.3_u2 is running with core patch 145333-37 or later.
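
On Solaris 10, the installed revision of the Solaris Cluster 3.3 core patch can be confirmed with showrev, for example:

# showrev -p | grep 145333

The reported revision should be 145333-37 or later.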

 

With an incompatible version of the ZFS SA NFS Plugin (e.g. 1.0.0) installed, failures will be seen during NFS fencing/unfencing and when cluster storage resources are brought online. Output similar to the following will be seen because the storage resource fails to start successfully:

# /usr/cluster/bin/cluster status

=== Cluster Resource Groups ===

Group Name      Node Name   Suspended   State
----------      ---------   ---------   -----
group1          ssc1-c1     No          Pending_online_blocked
                ssc1-c2     No          Offline

group2          ssc1-c1     No          Online_faulted
                ssc1-c2     No          Online_faulted


=== Cluster Resources ===

Resource Name   Node Name   State          Status Message
-------------   ---------   -----          --------------
resource1       ssc1-c1     Offline        Offline
                ssc1-c2     Offline        Offline

resource2       ssc1-c1     Online         Online - LogicalHostname online.
                ssc1-c2     Offline        Offline

resource3       ssc1-c1     Start_failed   Faulted
                ssc1-c2     Start_failed   Faulted


When this occurs in a branded zone cluster, messages similar to the following can be seen in /var/adm/messages of the non-global zone:

Nov 3 09:48:25 ssc1-c1 SC[SUNW.ScalMountPoint:4,group2,resource3,/usr/cluster/lib/rgm/rt/scal_mountpoint/scal_mountpoint_prenet_start]: Failed to get information for filer "192.x.x.1"

The exact symptom of the NFS fencing/unfencing failure will be reflected in messages similar to the following in the global zone's /var/adm/messages file:

Nov 3 09:49:48 ssc1-c1 cl_runtime: [ID 702911 kern.notice] NOTICE: obtaining access to all attached disks for ssc1-c1
Nov 3 09:49:48 ssc1-c1 is obtaining access to shared ZFS Storage Appliances.
Nov 3 09:49:48 ssc1-c1 cl_runtime: [ID 702911 kern.notice] NOTICE: Nov 3, 2017 9:49:48 AM com.sun.s7000.client.OSCNFSCfg executeWorkflow
Nov 3 09:49:48 ssc1-c1 SEVERE: Cannot make calls to the target system https://192.x.x.1:215/ak. Check the target system port number, DNS location, or IP Address.:192.x.x.1
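
To separate this plugin/Java incompatibility from a genuine network problem, basic TCP reachability of the appliance's administrative interface on port 215 can be sanity-checked from the affected cluster node (substitute the real appliance address for the example IP):

# telnet 192.x.x.1 215

If the connection succeeds but the SEVERE message above persists, the port number, DNS and IP address can be ruled out, pointing back at the incompatible plugin/Java combination.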

  
In addition, the 'clnas' Solaris Cluster command will be unable to communicate with the ZFS SA, producing output similar to the following:

# /usr/cluster/bin/clnas show -v -d all
clnas: Cannot access device "192.x.x.1" with the user ID and password
saved in the cluster.

=== NAS Devices ===

Nas Device: 192.x.x.1
Type: sun_uss
userid: osc_agent
nodeIPs{ssc1-c1}: 192.x.x.72
nodeIPs{ssc1-c2}: 192.x.x.71

#

 

Cause
