Last updated on SEPTEMBER 13, 2017
Applies to: Solaris Operating System - Version 10 3/05 and later
Information in this document applies to any platform.
The filesystem cache stores application data temporarily in physical memory as the system reads data from and writes data to disk. Because the disk subsystem is far slower to access (millisecond latency) than physical memory (microsecond latency), serving reads and writes from memory improves application IO performance. The filesystem cache grows and shrinks on demand: pages are consumed from the freelist (the list of free pages) as files are read into or written out of memory. Filesystems work hard to shield applications from direct disk IO latency, for example by using DRAM to buffer writes and to cache and prefetch reads. Advantages of the filesystem cache include:
- Programs that are run frequently may start faster after the first invocation if the required pages are found in the filesystem cache.
- Pages are kept in memory for an indefinite period of time and thus can be reused by other processes (system libraries, shared memory, etc.) without accessing the disk.
- A filesystem read-ahead (also known as prefetching) feature reads more data than the application requests when a sequential IO pattern is detected. This allows the next adjacent read to be fulfilled from the page cache instead of from disk.
- Typically, applications do not write data directly to disk. Instead, data is buffered in the page cache and written out to disk later. This allows applications to write at memory speed instead of disk speed.
- Delaying writes to the disk allows processes to modify data in memory. This improves performance because several write operations on a page can be satisfied by just one slow physical disk update.
- When a filesystem is running at 99.9% cache hit rate, only a relatively small number of physical read requests reach the disk, reducing physical IO demand.
Useful filesystem cache optimization strategies
When an application writes, the data is buffered in the filesystem cache to improve efficiency. However, a large number of dirty (modified) blocks in the filesystem cache can tie up memory and may cause memory shortages that negatively affect overall system performance. Filesystems such as UFS and ZFS use write throttling to prevent dirty buffers from occupying too much memory. To prevent a single process from dirtying too many pages in the filesystem cache, application processes are frequently put to sleep on write() to slow the growth of dirty buffers until storage catches up.
A write() to the filesystem cache should be nearly instantaneous (an exception is files opened for synchronous writes with O_SYNC or O_DSYNC). However, due to the write throttling imposed by the UFS and ZFS filesystems, applications may frequently block in write() or pwrite() system calls. A kernel stack trace of such a blocked thread on a ZFS filesystem typically shows the application thread sleeping inside the filesystem's write-throttling code.