Cache management system with multiple cache lists employing...

Patent number: 06615318
Type: Reexamination Certificate
Filed: 2002-01-22
Granted: 2003-09-02
Status: Active
Examiner: Bragdon, Reginald G. (Department: 2188)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to cache management systems. More particularly, the invention concerns a cache management system that utilizes multiple cache lists (such as least recently used lists), where a cache manager always removes entries from a “current removal” list (and demotes or destages the corresponding data from cache) until that list is exhausted and another list rotates into the role of current removal list. Depending upon their priority, new cache entries are added to lists nearer to or farther from the current removal list, measured along the prescribed order in which the role of current removal list rotates.
2. Description of the Related Art
Many different cache memory management systems are being used today. Broadly, “cache” is a special-purpose buffer storage, smaller and faster than main storage, used to hold a copy of those instructions and data likely to be needed next by a processor. Cache is often used to hold frequently accessed instructions and data that reside in slower storage such as disk or tape, thereby reducing access time. Data storage systems with cache therefore operate more efficiently. To run a cache-equipped data storage system at optimal performance, it is crucial to operate the cache as efficiently as possible.
In developing the optimal cache management strategy, there are many different considerations, and planning can get complicated quickly. Therefore, despite great effort in developing new cache management techniques, a number of common problems still exist. For example, many cache management schemes are not as expeditious as possible because, when a decision is made that cached data must be demoted, significant analysis is required to choose the cached data to be demoted. Most cache systems use a least recently used (LRU) or least frequently used (LFU) list as the basis for determining when cache entries are getting stale. A problem can occur, however, when multiple processes try to access cache and its LRU/LFU lists at the same time, since the first (winning) process to access the LRU/LFU list will typically lock the list to exclude other (losing) processes. The losing processes are therefore delayed. Another problem is that some cached data may tend to be removed from the cache prematurely despite the user placing a higher priority on that data relative to other cached data. By the same token, other cached data may reside in cache for an excessive period despite the user placing a lower priority on that data relative to other cached data.
Since customers seek faster and more efficient cache systems, and such products enjoy an advantage in the marketplace, engineers at IBM Corporation are continually researching possible improvements to overcome these and other limitations of known cache management systems.
SUMMARY OF THE INVENTION
Broadly, the present invention concerns a cache management system that utilizes multiple cache lists (such as LRU lists), where a cache manager always removes entries from a “current removal” list (and also demotes their counterparts in cache) until that list is exhausted and another list rotates into the role of current removal list. A separate set of cache lists is utilized for destaging, with the cache manager always removing entries from a different current removal list (and also destaging their counterpart data items to longer term storage) until that list is exhausted and another list rotates into the role of current removal list. Unlike demotion, destaging does not remove the cache list entries' counterpart data items from cache. In each set of lists (demotion or destaging), new cache list entries are made when data items are added to cache; according to its priority, each new entry is placed nearer to or farther from the current removal list along the prescribed order in which the role of current removal list rotates.
To set up the system, the following operations are performed. Initially, a number of cache lists are established to store cache list “entries,” each entry naming a data item present in cache storage. A designated sequence is established for progressing through the lists. Initially, one of the lists is designated as a current removal list. Now the system is ready to go. Responsive to a predetermined action condition (such as cache storage becoming full), a cache manager identifies the least recently (or frequently) used cache list entry of the current removal list. In one embodiment, used for cache grooming, the cache manager deletes this cache list entry and updates the cache storage by removing the data item represented by the deleted entry. In another embodiment, used to memorialize old data, the cache manager utilizes a separate set of lists, where a selected cache list entry is removed from the current removal list and its counterpart data item destaged from cache storage to longer term storage. Whenever the current removal list is empty, the cache manager designates the next list in the sequence to be the current removal list.
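For concreteness, the removal-and-rotation logic just described might look like the following minimal Python sketch. Everything here is illustrative: the class name RovingListCache, the use of OrderedDict as an LRU list, and the demote_one/destage_one split are assumptions made for exposition, not terms from the patent (which also keeps destaging on its own separate set of lists).

```python
from collections import OrderedDict

class RovingListCache:
    """Sketch: several LRU lists, one of which holds the role of
    "current removal" list at any given time."""

    def __init__(self, num_lists):
        # Each OrderedDict serves as one LRU list: least recently used
        # entry first, most recently used entry last.
        self.lists = [OrderedDict() for _ in range(num_lists)]
        self.removal = 0   # index of the current removal list
        self.store = {}    # stand-in for cache storage: name -> data item

    def _pop_lru(self):
        # Take the LRU entry of the current removal list; whenever that
        # list is exhausted, the next list in the designated sequence
        # assumes the role of current removal list.
        if not any(self.lists):
            raise LookupError("no cache list entries")
        while not self.lists[self.removal]:
            self.removal = (self.removal + 1) % len(self.lists)
        name, _ = self.lists[self.removal].popitem(last=False)
        return name

    def demote_one(self):
        # Grooming embodiment: delete the entry and remove the data
        # item it names from cache storage.
        name = self._pop_lru()
        del self.store[name]
        return name

    def destage_one(self, backing):
        # Destaging embodiment: copy the data item to longer-term
        # storage; unlike demotion, the item itself stays cached.
        name = self._pop_lru()
        backing[name] = self.store[name]
        return name
```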
Whenever a data item is cached, the cache manager adds a cache list entry naming the data item into one of the lists. This is achieved by determining a priority ranking of the data item on a scale having as many levels as the number of the lists (or the number of lists minus one, depending upon whether the current removal list is available for additions); the cache manager adds a cache list entry naming the data item to the list that is spaced in the sequence of progression beyond the current removal list by the number of the priority ranking.
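Continuing the sketch above, one hedged reading of the addition rule follows. The helper assumes the optional feature in which the current removal list accepts no additions, so valid priorities run from 1 to the number of lists minus one; a 0-based scale would apply if the current removal list were available for additions.

```python
def add(cache, name, data, priority):
    # Place the new entry 'priority' lists beyond the current removal
    # list in the sequence of progression, so higher-priority items
    # wait longer before their list takes on the removal role.
    assert 1 <= priority < len(cache.lists)
    target = (cache.removal + priority) % len(cache.lists)
    cache.store[name] = data
    cache.lists[target][name] = True   # append at the MRU end

# With four lists and list 0 currently the removal list, priority 3
# lands in list 3 (demoted last), priority 1 in list 1 (demoted first).
cache = RovingListCache(num_lists=4)
add(cache, "blk9", b"critical", priority=3)
add(cache, "blk7", b"scratch", priority=1)
assert cache.demote_one() == "blk7"
```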
The foregoing features may be implemented in a number of different forms. For example, the invention may be implemented to provide a method of cache management. In another embodiment, the invention may be implemented to provide an apparatus such as a data storage system with a cache managed as described herein. In still another embodiment, the invention may be implemented to provide a signal-bearing medium tangibly embodying a program of machine-readable instructions executable by a digital data processing apparatus to manage cache storage as discussed herein. Another embodiment concerns logic circuitry having multiple interconnected electrically conductive elements configured to manage cache as described herein.
The invention affords its users a number of distinct advantages. Chiefly, the invention implements a method of managing relative priorities among cached data. This permits the user of the caching subsystem to retain more important data in cache longer and to demote less important data earlier, thereby using cache storage with optimal efficiency. Similarly, when implemented to manage destaging, this process permits the user of the caching subsystem to accelerate destaging of certain data (such as critical data) and delay destaging of other data (such as less critical data). As another advantage, the invention is efficient in its cache management, since the decision of which cached data to demote or destage is made rapidly. Namely, when a decision is made to demote/destage, the cache manager need only review the current removal list to identify the cache list entry to demote/destage. Another benefit of the invention is that it reduces lock contention, since there are multiple, individually locked cache lists rather than a single LRU cache list. Thus, even though one application has a lock on a particular cache list, another application may still write to other cache lists. Contention may be further reduced by implementing an optional feature of the invention, where additions are not permitted to the current removal list. Thus, applications seeking to remove a cache list entry do not compete with applications trying to add a cache list entry. The invention also provides a number of other advantages and benefits, which should be apparent from the following description of the invention.
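The contention point can be made concrete with per-list locks. The fragment below uses Python's threading module purely as an assumption; the patent does not prescribe any particular locking primitive.

```python
import threading
from collections import OrderedDict

class LockedList:
    """One cache list guarded by its own lock: a process holding the
    lock on the current removal list blocks only users of that list,
    while additions to the other lists proceed in parallel."""

    def __init__(self):
        self.entries = OrderedDict()
        self.lock = threading.Lock()

    def append_mru(self, name):
        with self.lock:                  # contends only on this list
            self.entries[name] = True

    def pop_lru(self):
        with self.lock:
            if not self.entries:
                return None
            return self.entries.popitem(last=False)[0]
```

With the optional feature enabled, removers take only the current removal list's lock while adders take only the other lists' locks, so the two groups never collide.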
Inventors: Thomas Charles Jarvis; Steven Robert Lowe; Bruce McNutt
Examiners: Reginald G. Bragdon (Department: 2188); Mehdi Namazi
Attorney/Agent: Dan Hubert & Assoc.