Relabeling Performance Issues In Multi-level ZFS
Last updated on JULY 29, 2016
Applies to: Solaris Operating System - Version 11.3 and later
Information in this document applies to any platform.
Requires a Solaris Trusted Extensions and multi-level ZFS setup.
Customers use a ZFS multi-level file system to transfer data from one label to another because it allows an atomic move; that is, they can move and relabel a file without performing a copy operation.
In preparation for moving a file to DOMAINB, it is first relabeled to MDEX_HIGH (ADMIN_HIGH). Next, the file is moved (renamed) into a subdirectory of /multi/DOMAINB and given the label of DOMAINB.
This should be an atomic move, since all processing is done within the same dataset. The tranquility issue appears during the relabeling step above: on a noticeable percentage of the files, the customer receives errors indicating that the relabel failed, while the remaining files are processed correctly end to end. The actual relabel is performed with the setflabel(3TSOL) library function.
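The key property the workflow relies on is that a rename within a single file system (dataset) is one atomic metadata operation, with no data copied. A minimal sketch of just that move step, in Python for illustration (the label change itself is Solaris-specific, done via setflabel(3TSOL), and is not modeled here; the directory names mirror the DOMAINA/DOMAINB layout from the article):

```python
import os
import tempfile

def atomic_move(src, dst_dir):
    """Move src into dst_dir with a single rename.

    os.rename() is atomic when source and destination live on the
    same file system -- the analogue of staying within one ZFS dataset.
    """
    dst = os.path.join(dst_dir, os.path.basename(src))
    os.rename(src, dst)
    return dst

# Illustrative stand-ins for the labeled directories.
base = tempfile.mkdtemp()
domain_a = os.path.join(base, "DOMAINA")
domain_b = os.path.join(base, "DOMAINB")
os.makedirs(domain_a)
os.makedirs(domain_b)

src = os.path.join(domain_a, "payload.dat")
with open(src, "w") as f:
    f.write("data")

dst = atomic_move(src, domain_b)
print(os.path.exists(dst), os.path.exists(src))  # True False
```

If the source and destination were in different datasets, the rename would fail (EXDEV) and a copy-and-delete would be needed instead, which is exactly what the multi-level dataset avoids.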
Issue #1: The customer hits tranquility issues; that is, they must wait before the relabel succeeds.
Issue #2: Data-flow performance is very poor, roughly 15% of the Solaris 10 numbers. DTrace probes show that over 50% of the ZFS operations are opens, attribute checks, and closes of files such as the *_attr files.
Cache mnttab entries. If we implement only this, multiple getmntent() calls are avoided at the cost of one stat() call.
If we turn on the libzfs cache, the getextmntent() calls go away, but we still see many zfs getstat and getprop calls.
So instead of turning on the libzfs cache, we cache the 'multilevel' property in the mnttab cache. This removes all ioctls from labeld.
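The caching scheme above can be sketched as follows. This is a hypothetical illustration in Python, not the actual labeld implementation: the class name, the injected lookup callbacks (standing in for getextmntent() and a libzfs property ioctl), and the fake backends are all invented for the example. The essential idea from the article is preserved: repeated lookups are validated by a single stat() of /etc/mnttab, and the 'multilevel' property is cached alongside the mount entry so the per-file ioctl disappears.

```python
import os

MNTTAB = "/etc/mnttab"  # Solaris mount table; absent on other platforms

class MnttabCache:
    """Cache mount-table entries plus the dataset 'multilevel' property."""

    def __init__(self, lookup_entry, lookup_multilevel):
        self._lookup_entry = lookup_entry            # stand-in for getextmntent()
        self._lookup_multilevel = lookup_multilevel  # stand-in for a libzfs getprop
        self._mtime = None
        self._entries = {}      # mount point -> mnttab entry
        self._multilevel = {}   # mount point -> cached 'multilevel' value

    def _validate(self):
        # One stat() call decides whether the whole cache is still fresh.
        try:
            mtime = os.stat(MNTTAB).st_mtime
        except OSError:
            mtime = None
        if mtime != self._mtime:
            self._entries.clear()
            self._multilevel.clear()
            self._mtime = mtime

    def entry(self, mntpnt):
        self._validate()
        if mntpnt not in self._entries:
            self._entries[mntpnt] = self._lookup_entry(mntpnt)
        return self._entries[mntpnt]

    def multilevel(self, mntpnt):
        # Caching the property here is what removes the per-file ioctl.
        self._validate()
        if mntpnt not in self._multilevel:
            self._multilevel[mntpnt] = self._lookup_multilevel(mntpnt)
        return self._multilevel[mntpnt]

# Fake backends that count how often the expensive path is taken.
calls = {"entry": 0, "prop": 0}

def fake_entry(mntpnt):
    calls["entry"] += 1
    return ("rpool/multi", mntpnt, "zfs")

def fake_prop(mntpnt):
    calls["prop"] += 1
    return "on"

cache = MnttabCache(fake_entry, fake_prop)
for _ in range(100):
    cache.entry("/multi")
    cache.multilevel("/multi")
print(calls)  # {'entry': 1, 'prop': 1}
```

One hundred lookups hit each backend once; without the cache, every file processed would pay the getmntent()/ioctl cost again.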
These changes resulted in more than a 4X improvement in the relabeling process.