Title: Method and apparatus for improving the efficiency of cache...
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate (active)
Filed: 2001-03-12
Issued: 2003-11-25
Examiner: Sparks, Donald (Department: 2187)
U.S. Classes: C711S118000, C711S134000, C711S136000, C711S159000, C711S160000
Patent Number: 06654855
ABSTRACT:
FIELD OF THE INVENTION
This invention relates to data processing and storage systems and more particularly to cache memory storage used in such systems.
BACKGROUND OF THE INVENTION
Data caching is used in virtually all systems in which information is transferred from one place to another. For example, computers constantly move data back and forth between different storage media (tapes, disks, main memory, cache memory) in accordance with the usage patterns of the data. Large capacity, relatively slow, media are used to store data that is not, and is not expected to be, of current interest. When such data is requested, it is moved from the slow media to a faster, more expensive and, consequently, more limited in capacity, medium. This process is called data caching and the use of a faster medium produces performance gains under the generally valid assumption that, once data has been accessed, it will be accessed repeatedly. This process often continues through several levels of storage hierarchy, with smaller portions of data moved into successively faster media in an attempt to reflect anticipated usage.
Since typically only a small fraction of the data kept at one level of the storage hierarchy can be held at the next higher level, and since, when new data is requested and hence moved to that higher level, other data must be moved back to the lower level to make room for this new data, the decision as to what data should be removed and what should be retained in the storage media of a given level is critical to the efficiencies to be gained from caching data in the first place.
The prior art has addressed the decision as to what data should be removed and what should be retained in the storage media of a given level in one of three ways: (1) deterministic replacement in which some deterministic algorithm, not based on any access pattern, is used to identify the data page to be removed; (2) random replacement, in which the page to be removed is selected using some pseudo-random process; and (3) least-recently-used (LRU) replacement in which the page that has been least recently accessed is selected for removal.
The relative ineffectiveness of either of the first two of these procedures is obvious. The LRU method, while considerably better than the other two alternatives, suffers from the fact that a heavily used, but temporarily inactive, data page can be, and frequently is, removed in deference to a more recently accessed page that is rarely used.
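The weakness of pure LRU replacement can be made concrete with a small example. The following sketch is illustrative only and is not taken from the patent; the class name, keys, and capacity are arbitrary. Because eviction is based strictly on recency, a page that has been accessed a hundred times is displaced by two pages that are each touched once:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently accessed page,
    no matter how often that page has historically been used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # key -> page data, oldest entry first

    def get(self, key):
        if key not in self.pages:
            return None
        self.pages.move_to_end(key)  # mark as most recently used
        return self.pages[key]

    def put(self, key, data):
        if key in self.pages:
            self.pages.move_to_end(key)
        self.pages[key] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)  # discard the least recently used page

# A heavily used page is evicted as soon as two rarely used pages arrive.
cache = LRUCache(capacity=2)
cache.put("hot", "...")
for _ in range(100):
    cache.get("hot")          # frequent access, but only recency is remembered
cache.put("cold1", "...")
cache.put("cold2", "...")     # "hot" is now least recently used and is evicted
assert cache.get("hot") is None
```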
A related issue is the amount of data that is transferred from lower-level storage to a higher level when new data is requested. This amount of data is usually referred to as a cache “page.” Generally, the slower the medium, the larger the cache page transferred to be cached at the next higher level, because of the relationship between the time needed to access the first data element in a page and the time needed to transfer the subsequent data elements once the first element has been accessed. Once a page has been stored on the lower-level medium, if a request is made for any data on that page, the entire page is transferred back from the lower-level storage to the cache memory.
While the optimum cache page size is largely a function of the speed of the storage medium at the level immediately below the cache memory, the structure of the data within a page strongly affects the likelihood that all or most of the data on that page will subsequently be required. Ideally, the data transferred in each cache page should be highly correlated in the sense that, if some of the data is needed, then most of the rest of the data on the page will most likely also be needed. This is, in fact, often the case. For example, files stored on disk are frequently considerably larger than the pages into which they are partitioned for storage purposes. File systems often take advantage of this fact by anticipating that the pages adjacent to a just-requested page will subsequently be requested, and they transfer these adjacent pages to higher-level storage before the pages are actually requested.
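That read-ahead behavior can be sketched as follows; the page-numbered granularity, the `read_page` device call, and the prefetch window are assumptions made only for illustration and are not details taken from the patent:

```python
def read_with_readahead(device, page_number, cache, readahead=4):
    """Fetch the requested page and, anticipating sequential access,
    stage the next few adjacent pages into the cache before they are asked for."""
    for n in range(page_number, page_number + 1 + readahead):
        if n not in cache:
            cache[n] = device.read_page(n)   # hypothetical device interface
    return cache[page_number]
```

The benefit depends entirely on the assumption that adjacent pages really are parts of the same large file; the next paragraph describes the case where that assumption fails.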
However, in many cases the opposite is true: the data structures being stored (often referred to as cache “lines”) are small compared to the size of the page into which they are grouped. In this event, only a portion of the transferred data page may actually be accessed, with the rest taking up valuable cache memory space that could more profitably be used for other data structures. Even so, there may exist hidden correlations among these data structures that can only be discerned by observing their access patterns; that is, if two apparently independent data structures tend to be accessed at roughly the same time and with roughly the same frequency, they may be correlated. When this is the case, caching efficiencies can clearly be gained by grouping these correlated structures into the same cache page for storage.
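One way to expose such hidden correlations is to count how often two cache lines are touched within the same short span of accesses. The sketch below illustrates only that general idea and is not the mechanism the patent claims; the window size and identifiers are arbitrary:

```python
from collections import Counter
from itertools import combinations

def co_access_counts(access_log, window=8):
    """Count how often each pair of lines appears together within a sliding
    window of accesses; frequently co-accessed pairs are candidates for
    being grouped into the same cache page."""
    pairs = Counter()
    for i in range(len(access_log)):
        recent = set(access_log[i:i + window])
        for a, b in combinations(sorted(recent), 2):
            pairs[(a, b)] += 1
    return pairs

# Two apparently independent metadata entries that keep appearing together:
log = ["inode_7", "inode_42", "inode_3", "inode_7", "inode_42"]
print(co_access_counts(log).most_common(1))
```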
One important example in which the data structures of interest are generally small compared to the pages into which they are grouped for storage purposes is the set of data structures that hold file system “metadata,” that is, information describing the attributes of the various files and directories comprising a file system. Cache pages destined for disk storage typically contain on the order of eight to sixteen metadata structures. These structures tend to be grouped into pages based, for example, on the names of their associated files or directories, or on the chronological order in which they were created. Consequently, their access patterns are unlikely to be correlated.
Grouping of data structures for paging purposes has been largely ignored in the prior art, possibly because most data structures do exceed the size of a single page so the data elements within a page are naturally correlated. As noted, however, there are important exceptions to this rule that can significantly affect the efficiency with which such data structures are cached.
Therefore, there exists a need for a method and apparatus for determining which cache lines to remove from the cache memory in order to make room for new data and for grouping correlated data into pages in order to permit efficient caching of the data.
SUMMARY OF THE INVENTION
In accordance with the principles of the invention, a time-weighted metric is associated with each cache line that is being held in a data cache. The value of the metric is re-computed as the lines are accessed and the metric value is used to group cache lines for paging purposes. The use of the metrics increases the likelihood that the most active lines are held in the cache so long as they remain active.
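The summary does not give the formula for the time-weighted metric, but an exponentially decaying access count is one common way such a metric could be realized; the sketch below rests on that assumption, with an arbitrary decay constant:

```python
import math

class LineMetric:
    """Time-weighted activity metric for one cache line: prior activity decays
    exponentially with the time elapsed since the last access, and each new
    access adds one unit, so the value reflects both recency and frequency."""

    DECAY = 0.01   # assumed decay rate per unit time (illustrative, not from the patent)

    def __init__(self, now):
        self.value = 1.0
        self.last_access = now

    def touch(self, now):
        elapsed = now - self.last_access
        self.value = self.value * math.exp(-self.DECAY * elapsed) + 1.0
        self.last_access = now
        return self.value
```

Under a metric of this kind, a heavily used line that is briefly idle retains a large value and so survives eviction, which addresses the LRU shortcoming described in the background.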
The metrics are computed and stored, and the storage locations that hold them are linked together in several linked lists that allow the metrics to be easily manipulated for updating and for determining which metrics represent the most active cache lines.
In accordance with one embodiment, the operation of the cache memory is broken into time intervals, each interval having a predetermined duration. An ordered linked list is maintained of the time-weighted metrics associated with the cache lines. This ordered linked list consists of a connected set of sub-chains, wherein the sub-chains are linked in order of the time interval in which their entries were last accessed and the entries within each sub-chain are ordered in accordance with their associated metrics. One or more separate lists are used to group the entries accessed during a given time interval according to their metrics, with any two entries having the same metric ordered most-recently-accessed first. In particular, the sub-chains are indexed by these lists. When a time interval ends, all the chains indexed in the lists are linked together in the order of their metrics and placed at the top of the ordered linked list.
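A drastically simplified sketch of that bookkeeping follows. It uses ordinary Python lists in place of the embodiment's linked sub-chains and omits the pointer manipulation entirely, so it illustrates only the ordering rules described above, not the claimed structure:

```python
from collections import defaultdict

class IntervalOrderedList:
    """Keeps cache-line entries in an ordered list of sub-chains. Entries touched
    in the current interval are grouped by metric value (ties ordered most recently
    accessed first); when the interval ends, those groups are concatenated in
    descending metric order and prepended to the main list, so the head always
    holds the most active entries from the most recent interval."""

    def __init__(self):
        self.ordered = []                 # stand-in for the ordered linked list
        self.current = defaultdict(list)  # metric value -> entries touched this interval

    def record_access(self, line_id, metric_value):
        # Drop any stale occurrence of this line, then index it by its new metric.
        self.ordered = [e for e in self.ordered if e[0] != line_id]
        for entries in self.current.values():
            entries[:] = [e for e in entries if e[0] != line_id]
        self.current[metric_value].insert(0, (line_id, metric_value))  # MRU first

    def end_interval(self):
        # Link this interval's groups together in descending metric order
        # and place the resulting chain at the top of the ordered list.
        chain = []
        for value in sorted(self.current, reverse=True):
            chain.extend(self.current[value])
        self.ordered = chain + self.ordered
        self.current.clear()
```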
In accordance with another embodiment, the ordered linked list is traversed in order to determine which cache lines to remove when additional space is needed in the cache memory.
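Under the same simplifications, reclaiming space is then a walk from the tail of the ordered structure, where the least active entries from the oldest intervals accumulate (`interval_list` here is an instance of the IntervalOrderedList sketched above):

```python
def evict(interval_list, count):
    """Drop and return the identifiers of the `count` least active entries,
    taken from the tail of the ordered structure."""
    if count <= 0:
        return []
    victims = [line_id for line_id, _ in interval_list.ordered[-count:]]
    del interval_list.ordered[-count:]
    return victims
```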
Inventors: Bopardikar, Raju C.; Stiffler, Jack J.
Assignee: EMC Corporation
Attorney, Agent, or Firm: Kudirka & Jobse LLP
Examiners: Sparks, Donald; Dinh, Ngoc