Oracle ZFS Storage Appliance: Does the ZFS Storage Appliance Reserve 1/64 of Data Pool Space?
Last updated on JUNE 11, 2018
Applies to: Sun ZFS Storage 7420 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7320 - Version All Versions to All Versions [Release All Releases]
Sun ZFS Storage 7120 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-2 - Version All Versions to All Versions [Release All Releases]
Oracle ZFS Storage ZS3-4 - Version All Versions to All Versions [Release All Releases]
7000 Appliance OS (Fishworks)
On ZFS engineered storage, clients can often see slow I/O when the data pool is critically low on free space. LUNs can also go offline for the same reason.
This free space can be seen from the BUI in later AKD releases, but on older versions TSC Support needs to check the support bundle, or join a shared shell and check the free space at the command line with zfs list.
In other vendors' ZFS implementations, you may see approximately 1/32 or 1/64 of the raw pool space held back so that writes can continue (sometimes called "slop").
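To put that "slop" in perspective, the arithmetic below shows how much space a 1/32 or 1/64 reservation would amount to. This is illustrative shell arithmetic only, not appliance commands, and the 100 TiB pool size is a hypothetical example.

```shell
# Illustrative only: the "slop" other ZFS implementations hold back
# is 1/32 or 1/64 of the raw pool size. 100 TiB pool is hypothetical.
pool_gib=$((100 * 1024))          # 100 TiB expressed in GiB
slop_64=$((pool_gib / 64))        # 1/64 reservation
slop_32=$((pool_gib / 32))        # 1/32 reservation
echo "1/64 slop: ${slop_64} GiB"
echo "1/32 slop: ${slop_32} GiB"
```

On a pool of that size, the implicit reservation is 1.5 to 3 TiB, which is why those implementations rarely reach a hard 100% full condition.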
The ZFS-SA does not do this, so it is possible for a data pool to become 100% full with no free space left.
This will not be apparent from zpool list or from the Storage configuration page in the BUI, as those report all space that is unused inside LUNs or filesystem reservations as "free".
It is important to understand that such "free" space inside LUNs or reservations is not available for the ZFS filesystem to allocate; it is the available free space reported by zfs list that matters.
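The distinction can be seen by parsing the pool-root AVAIL column from zfs list output. The output, pool name, and sizes below are invented for illustration: the LUN's reservation leaves apparent free space inside the LUN, but the pool root's AVAIL - what ZFS can actually allocate for new writes - is nearly zero.

```shell
# Hypothetical 'zfs list' output; names and sizes are invented.
sample='NAME          USED  AVAIL  REFER  MOUNTPOINT
pool-0        9.9T   0.1T   100K  /pool-0
pool-0/lun0   8.0T   6.1T   2.0T  -'
# The usable figure is AVAIL on the pool-root line (exact match on NAME).
avail=$(printf '%s\n' "$sample" | awk '$1 == "pool-0" {print $3}')
echo "Usable pool free space: ${avail}"
```

Even though the LUN dataset shows 6.1T available (its unused reservation plus pool free space), only the 0.1T on the pool-root line can actually be allocated.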
We officially publish only a limited set of documents and references; some of them are listed here:
Sun Storage 7000 Unified Storage System: How much free space to leave when configuring a pool with LUNs (Doc ID 1995759.1)
Sun Storage 7000 Unified Storage System: How to Troubleshoot ZFS System Pool Issues (Doc ID 1388529.1)
Oracle ZFS Storage Appliance: Writes to an iSCSI or FC LUN fail with medium errors, filesystem becomes read-only (Doc ID 1469814.1)
Generic ZFS documentation (on which the ZFS storage is based)
Relevant quote - "In addition, set a reservation on a dummy file system to reserve 10-20% of pool space to maintain pool performance."
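The quoted recommendation can be sketched as below, assuming a generic (non-appliance) ZFS system; the pool name "tank" and the 50 TiB pool size are hypothetical, and the commands are only printed here rather than executed.

```shell
# Sketch of the "dummy file system reservation" recommendation.
# Pool name and size are hypothetical; commands are printed, not run.
pool=tank
pool_gib=$((50 * 1024))           # 50 TiB in GiB
reserve_gib=$((pool_gib / 10))    # 10% of pool space (quote suggests 10-20%)
echo "zfs create ${pool}/slop"
echo "zfs set reservation=${reserve_gib}G ${pool}/slop"
```

The reservation counts against pool free space without storing any data, so writes start failing against the reservation margin before the pool is truly exhausted, and the margin can be released in an emergency.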
ZFS Pool Capacity Recommendations:
If data is mostly added (write once, remove never), then it is very easy for a redirect-on-write architecture such as ZFS to find new blocks. Here the percentage full at which performance is impacted would be higher than the generic rule of thumb, say 95%.
Similarly, if data is made of large files/large blocks (128K or 1MB) where data is removed in bulk operations, then the rule of thumb can also be relaxed.
The other end of the spectrum is a pool where a large percentage (say 50% or more) is made up of 8K chunks (DB files, iSCSI LUNs, or many small files) with constant rewrites (dynamic data); there the 90% rule of thumb needs to be followed strictly.
If 100% of the data is small blocks of dynamic data, then you should monitor your pool closely, possibly starting as early as 80% full.
The sign to watch for is increased disk IOPS to achieve the same level of client IOPS.
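The rules of thumb above can be folded into a simple capacity check. The byte counts would come from zfs list -p on the pool root on a real system; the values below are hypothetical.

```shell
# Sketch of a capacity check using the 90% rule of thumb.
# Values are hypothetical; on a real system take USED and AVAIL for the
# pool root from 'zfs list -p' (parseable, exact byte counts).
used=9500                         # space used (GiB, hypothetical)
avail=500                         # pool-root AVAIL (GiB, hypothetical)
pct=$(( used * 100 / (used + avail) ))
echo "Pool is ${pct}% full"
if [ "$pct" -ge 90 ]; then
  echo "WARNING: above the 90% rule of thumb - watch disk IOPS per client IOP"
fi
```

The threshold should be adjusted per the guidance above: relaxed toward 95% for write-once or large-block data, tightened toward 80% for pools dominated by small dynamic blocks.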