Smoothing bursts of disk load in a file system

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

C711S113000, C711S170000, C710S056000, C707S793000

active

06678794

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a computer cache memory management system. More particularly this invention relates to a technique of smoothing disk load in a computer cache management system that employs asynchronous write requests.
2. Description of the Related Art
From time to time during execution of computer applications, requests are made to the computer's file system to write or read data to or from computer files, which incurs disk input/output (“I/O”). Generally computer file systems partition disk space into working units known as blocks, which are variously assigned to files. A write request may be either synchronous or asynchronous. On receipt of a synchronous write request from an application, the file system immediately writes file data to disk blocks, and the requesting application is required to wait until the modification to the file arrives at the disk.
It is a drawback of using synchronous requests that if an application makes frequent requests for the file system to write to the same disk blocks, the application will be considerably slowed by the need for the application to wait during the pendency of each successive write request. Consequently it is common for file systems to utilize asynchronous write requests, in which the data is initially brought from the disk into an internal memory or cache buffer. Subsequent write requests are directed to the cache buffer, rather than directly to the disk. Typically a cache buffer stores the contents of one disk block, or a part thereof. On an asynchronous write request, in which the disk block is to be modified, the cache buffer is modified, and flagged accordingly. Such modified cache buffers are termed “dirty”. Unmodified cache buffers are termed “clean”. Once the cache buffer has been flagged as dirty, it, rather than the disk block, contains the most up-to-date information. With this approach, instead of awaiting completion of a sequence of relatively slow disk I/O operations, the application at most only waits for one I/O operation, during which the disk block is fetched into the cache. If the block is already resident in the cache, then even this I/O operation is avoided. Subsequent write requests only produce accesses to the cache, which are much more rapid than disk accesses.
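The write-back behavior described above can be illustrated with a minimal sketch. This is not the patent's implementation; the class and method names are illustrative, a dict stands in for the disk, and buffers hold whole blocks:

```python
# Minimal sketch of a write-back cache: an asynchronous write modifies
# an in-memory buffer and marks it dirty; the disk block is updated
# only later, when the buffer is flushed.

class CacheBuffer:
    def __init__(self, block_no, data):
        self.block_no = block_no
        self.data = data
        self.dirty = False          # clean until modified in memory

class WriteBackCache:
    def __init__(self, disk):
        self.disk = disk            # dict block_no -> bytes, stands in for the disk
        self.buffers = {}           # block_no -> CacheBuffer

    def write_async(self, block_no, data):
        buf = self.buffers.get(block_no)
        if buf is None:
            # At most one I/O: fetch the block into the cache.
            buf = CacheBuffer(block_no, self.disk.get(block_no, b""))
            self.buffers[block_no] = buf
        buf.data = data
        buf.dirty = True            # the buffer, not the disk, is now current

    def flush(self, block_no):
        buf = self.buffers[block_no]
        if buf.dirty:
            self.disk[block_no] = buf.data   # the actual disk I/O
            buf.dirty = False
```

Note that repeated writes to the same block touch only the cache; the disk sees a single write when the buffer is flushed.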
Caching disk blocks may be helpful also for read requests. Consider, for example, a read access pattern, in which there are multiple requests to read the same block during a short time interval. If the block is read from disk and transferred to a buffer, only the first read request of the block will cause an I/O operation. All the subsequent requests will be served much faster than the first one, due to presence of the block's contents in the cache.
A weakness of using asynchronous requests is that modifications to files are stored in memory buffers and on a system crash they are lost. Ideally, to prevent information loss, all the dirty cached buffers should be written to the disk just prior to the crash. However, it is generally impractical to predict a crash. In order to protect the buffered information against being lost, a file system may run a sync process or daemon, such as the known daemon syncd. This process periodically writes, or flushes, the contents of all dirty buffers back onto the disk. Immediately following the flushing operation, the contents of the cache buffer and the corresponding block of the disk are identical. Later, as a result of new write requests, the flushed buffers in the cache are again modified, and they, rather than the disk blocks, contain the last modifications of the files.
Typically the daemon syncd is coded as an infinite loop, which consists of calling a function sync. The function sync flushes all the cached dirty buffers, followed by a sleep of some period of time. As used herein the term “sync” means an invocation of the function sync by the daemon syncd or a similar daemon.
There are a number of difficulties associated with prior art implementations of the daemon syncd, the amelioration of which is the concern of the present invention. It may be noted that the daemon syncd does not guarantee full protection of buffered data in the cache against a system crash. Indeed, if a system crash happens while the daemon syncd is sleeping, modifications made since the last sync are lost. To overcome this difficulty a file system may also provide a synchronous write operation, as there is a class of write requests which must not be lost. For example some updates of internal file system structures must arrive at the disk before the file system can continue operations. Furthermore, an application can explicitly direct the file system to process its write requests synchronously.
As described above, the daemon syncd periodically flushes all dirty buffers to the disk and sleeps the rest of the time. This periodicity of activity on the part of the daemon syncd induces a bursty pattern of disk load. Due to such bursts of disk activity, flushing a dirty buffer may take significantly longer than would be required to write the buffer between such bursts. This delay, in turn, may slow down executing applications in the following manner. While the daemon syncd is flushing buffers associated with a file, various locks on the file's inode, its indirect blocks and other internal file structures may be held. In consequence all other processes which need the resources secured by the locks must wait until the daemon syncd finishes flushing the file and releases the locks. Furthermore, all the applications currently asserting synchronous write requests will also be delayed.
Since the cache is typically much smaller than the amount of information written to disk, in practice all the cache buffers quickly become occupied. When this situation prevails, in order to fulfill a request to access a disk block which is not currently buffered, one of the occupied cache buffers must be “stolen”. That is to say that one of the cache buffers is chosen and its contents discarded, so that it may serve to cache the newly accessed disk block. It will be apparent that if the chosen buffer is dirty, its contents must be flushed to the disk before it is reused. In this case the application, for which a buffer is requested, must wait for two I/O operations to complete: (a) flushing the dirty buffer and (b) fetching the needed block from disk into the stolen buffer. Ideally, to prevent an application from delaying while a stolen dirty buffer is being flushed, a clean buffer would always be available to satisfy the application's request.
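Buffer stealing, and the extra I/O a dirty victim costs, can be sketched as follows. The victim-selection policy (first clean buffer found, else an arbitrary dirty one) and all names are assumptions for illustration, not the patent's algorithm:

```python
# Sketch of buffer "stealing" when the cache is full: a clean victim
# can be discarded at once, while a dirty victim must be flushed
# first, doubling the I/O the requesting application waits for.

def steal_buffer(buffers, disk):
    """Evict one buffer; return its block number and the number of
    I/O operations the requesting application must wait for."""
    clean = [b for b, buf in buffers.items() if not buf["dirty"]]
    if clean:
        victim = clean[0]
        del buffers[victim]
        return victim, 1                 # one I/O: fetch the new block
    # No clean buffer available: flush a dirty victim first.
    victim, buf = next(iter(buffers.items()))
    disk[victim] = buf["data"]           # (a) flush the dirty buffer
    del buffers[victim]
    return victim, 2                     # (a) flush + (b) fetch
```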
Prior art attempts to improve this situation have involved the notion of fresh buffers. From study of temporal locality in practical computer applications, it is known that once a cache buffer becomes dirty, it has a high probability of being further modified several times during a relatively short subsequent time interval. During this interval it is termed a “fresh” dirty buffer, and afterward it is said to be an “old” dirty buffer. The Linux operating system activates a kernel process, the daemon bdflush, when the percentage of dirty cache buffers exceeds a fixed threshold, as determined by a parameter nfract. The daemon bdflush appears to recognize fresh buffers to a limited extent. In a cycle of operation the daemon bdflush flushes up to a maximum number of dirty buffers, regardless of their age, as determined by another parameter ndirty. Linux also imposes a maximum age on dirty buffers, specified by the parameter age_buffer, after which they are flushed by the daemon bdflush.
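The bdflush policy described above can be sketched as one flush cycle. The parameter values and the buffer representation are illustrative assumptions, not the values or data structures of any Linux release:

```python
# Sketch of a bdflush-style cycle: triggered when the dirty fraction
# exceeds NFRACT, it flushes at most NDIRTY buffers per cycle, while
# buffers dirty for longer than AGE_BUFFER are flushed unconditionally.

NFRACT = 0.40        # wake when >40% of buffers are dirty (illustrative)
NDIRTY = 2           # max buffers flushed per cycle (small for the demo)
AGE_BUFFER = 30.0    # flush dirty buffers older than this, in seconds

def bdflush_cycle(buffers, disk, now):
    dirty = [(b, buf) for b, buf in buffers.items() if buf["dirty"]]
    over_threshold = len(dirty) / len(buffers) > NFRACT
    flushed = 0
    for block_no, buf in dirty:
        too_old = now - buf["dirtied_at"] > AGE_BUFFER
        if too_old or (over_threshold and flushed < NDIRTY):
            disk[block_no] = buf["data"]   # disk I/O
            buf["dirty"] = False
            flushed += 1
    return flushed
```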
SUMMARY OF THE INVENTION
It is therefore a primary object of some aspects of the present invention to improve the file management performance of computer operating systems.
It is another object of some aspects of the present invention to improve cache response time in computer operating systems utilizing asynchronous I/O operations.
It is a further object of some aspects of the present invention to spread the disk load incurred by I/O operations of a comput