Adaptive data insertion for caching

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

C711S137000, C711S144000, C711S204000, C711S213000

Status: active

Patent number: 06728837

ABSTRACT:

FIELD OF THE INVENTION
The invention pertains to the field of computer systems. More particularly, this invention relates to data caching techniques.
BACKGROUND OF THE INVENTION
A typical computer system includes one or more data caches. A data cache is a portion of computer memory used to hold (typically) a part of the data that is also held in a second, backing memory, which is usually slower to access than the memory used for the data cache.
For example, many computer systems contain one or more host systems and one or more storage systems. A storage system usually provides relatively large-scale, non-volatile storage of information that may be accessed by one or more host systems. A host system typically accesses a storage system by performing write and read operations to and from the storage system via a communication link between the host and storage systems.
A typical host system includes a host processor and a host cache (i.e., a higher-level cache). A typical host cache temporarily holds information obtained from a storage system and provides the host processor with relatively fast access to information held in the host cache. A storage system commonly includes a storage medium and a storage system cache. A typical storage system cache (i.e., a lower-level cache) temporarily holds information obtained from the storage medium and provides one or more host systems with relatively fast access to the information contained in the storage system cache. A shared, lower-level cache may include, but is not limited to, a block cache or a file system cache (e.g., a disk array block cache or a file server cache; either may be shared by multiple client machines, each having its own cache) or a memory cache attached to a memory board shared by several central processing unit (CPU) caches in a multiprocessor system.
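To make this hierarchy concrete, the following is a minimal illustrative sketch (in Python, with hypothetical names and plain dictionaries standing in for the two caches and the storage medium) of a read request that first checks the host cache, then the storage system cache, and only then the storage medium:

```python
def read_block(key, host_cache, storage_cache, storage_medium):
    """Illustrative read path through a two-level cache hierarchy."""
    if key in host_cache:                 # fastest: higher-level (host) cache
        return host_cache[key]
    if key in storage_cache:              # next: lower-level (storage system) cache
        data = storage_cache[key]
    else:
        data = storage_medium[key]        # slowest: the storage medium itself
        storage_cache[key] = data         # insert into the lower-level cache
    host_cache[key] = data                # insert into the higher-level cache
    return data
```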
In general, data is put into a cache so that it can be retrieved (referenced) again in the future, more rapidly than accessing the data from the backing store, which is typically slower to access than the cache memory. Since the size of the cache (i.e., the amount of data it can hold) is typically limited—and much smaller than the lower level data store—it is important that the data kept in the cache is (as far as possible) the data most likely to be re-referenced. This is accomplished by the combination of an insertion policy (which things are to be placed into the cache) and a replacement policy (which things are to be dropped from the cache to make space for new data). A commonly used replacement policy (or replacement algorithm) is to manage the data in the cache by a technique known as Least Recently Used (or LRU)—data that has not been referenced for the longest time is the first to be discarded when new space is needed.
A commonly used insertion policy is to treat all data that is inserted into the cache as likely to be of equal future value (i.e., equally likely to be re-referenced in the time it spends in the cache); thus new data is added at the end furthest away from the “next to be dropped” end of the replacement policy. It is common to implement the resulting cache management data structure (or cache metadata) as a list of pointers to the blocks: at one end (the head) are the pointers to the blocks next to be discarded; at the other end (the tail) are the blocks most recently referenced or inserted. Those skilled in the art will recognize that there are many possible structures that can be used for this metadata, and this is just one example.
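As a concrete, purely illustrative sketch of such metadata, the Python fragment below keeps an ordered structure of block identifiers in which the head is the next block to be discarded and the tail holds the most recently referenced or inserted blocks; the class and method names are hypothetical, not taken from the patent:

```python
from collections import OrderedDict

class LRUCacheMetadata:
    """Toy LRU bookkeeping: the head is next to discard, the tail is most recent."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block key -> cached data

    def insert(self, key, data):
        if key in self.blocks:
            self.blocks.move_to_end(key)     # re-inserted blocks move to the tail
        self.blocks[key] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # discard from the head (least recently used)

    def reference(self, key):
        if key not in self.blocks:
            return None                      # cache miss
        self.blocks.move_to_end(key)         # a hit moves the block to the tail
        return self.blocks[key]
```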
Unfortunately, some data in a cache are never referenced during their time in the cache. (The cache lifetime, or cache residency time, is a measure of how long the data resides in the cache before being selected for replacement.) This wastes space in the cache that could have been used to hold other data that is referenced in the future, and thus reduces the effectiveness of the cache. A mechanism that could selectively decide which blocks are worth inserting in the cache (i.e., which ones are more likely to be referenced more often during their lifetime in the cache) could increase the effectiveness of the cache.
Additionally, some data added to a cache is either referenced quickly, or not at all. This means that a mechanism that can selectively decide to insert data into the cache in a way that limits its cache lifetime in the absence of a reference will also enable the space taken up by the data to be reused more quickly, and thus increase the cache's effectiveness.
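One way such a mechanism might work (shown here only as an illustrative sketch with hypothetical names, using a plain Python list as the replacement list) is to insert new data part-way along the list rather than at the tail, so that unreferenced data reaches the eviction end sooner, while a hit still promotes the block to the tail:

```python
class InsertionPointCache:
    """Toy replacement list: index 0 is the head (next to be discarded)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.order = []                      # block keys, head first
        self.data = {}

    def insert(self, key, value, depth_fraction=1.0):
        # depth_fraction 1.0 inserts at the tail (longest possible lifetime);
        # smaller values insert closer to the head (shorter lifetime if unreferenced).
        if key in self.data:
            self.order.remove(key)
        self.data[key] = value
        self.order.insert(int(depth_fraction * len(self.order)), key)
        if len(self.order) > self.capacity:
            victim = self.order.pop(0)       # evict from the head
            del self.data[victim]

    def reference(self, key):
        if key not in self.data:
            return None                      # cache miss
        self.order.remove(key)               # a hit promotes the block to the tail
        self.order.append(key)
        return self.data[key]
```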
When multiple clients share a common cache, it has proved difficult to achieve appropriate choices for which data from which client are most likely to be referenced in the future—either by the same client or by other clients. Thus, a mechanism that can provide preferential treatment for cache clients that insert data that is more likely to be referenced is likely to result in a more effective cache.
Data is inserted into a cache by many processes: it can be read in response to a client read from the lower-level store; it can be written to by a client; it can be read-ahead in anticipation that it might soon be requested by a client; and it might be “demoted” by a client from its own cache to the lower-level cache, in case more space is available there. In all cases, the data is retained in the cache in the hope that it will be referenced. However, existing approaches are unable to exploit the fact that different processes may result in cached data with different value—i.e., different likelihood of being referenced in the future.
Similarly, it may also be the case that different clients, using the same insertion processes, may insert data into the cache that has different likelihood of being referenced, yet existing approaches do not distinguish these cases.
All these diverse processes for placing data in the cache are called “insertion sources” in what follows. Those skilled in the art will recognize that many other possibilities for identifying insertion sources exist, and can be employed in what follows.
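Purely as an illustration (the source names below are examples chosen for this sketch, not an authoritative list), insertions might be tagged with their source so that later policy decisions can treat the sources differently:

```python
from enum import Enum, auto

class InsertionSource(Enum):
    """Example insertion sources; a real system may distinguish others."""
    DEMAND_READ = auto()    # read from the lower-level store on a client miss
    CLIENT_WRITE = auto()   # data written by a client
    READ_AHEAD = auto()     # prefetched in anticipation of a future request
    DEMOTION = auto()       # demoted from a client's own cache

def record_insertion(per_source_counts, source):
    """Count insertions per source for use by a later insertion policy."""
    per_source_counts[source] = per_source_counts.get(source, 0) + 1
    return per_source_counts
```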
As a consequence, methods are needed to identify which data is most likely to be re-referenced in the future, and which insertion sources are most likely to insert such data into a cache. Furthermore, methods are needed to guide the replacement policy so that it selectively favors discarding, from the cache, data that is not likely to be referenced, and such methods are needed in situations where multiple clients share a common cache.
SUMMARY OF THE INVENTION
An embodiment of a method for storing data in a cache is disclosed. The method comprises steps of creating at least one ghost cache recording hits from at least one source; calculating a merit figure associated with hits in the at least one ghost cache; and identifying an insertion point in metadata associated with the cache based on the merit figure.
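One possible reading of this embodiment, sketched below in Python, is a per-source “ghost cache” that records only block identifiers (no data); hits against it yield a merit figure, and the merit figure is mapped to an insertion depth in the cache metadata. The class names, the hit-rate merit formula, and the linear mapping are all assumptions made for this sketch, not the patent's actual implementation:

```python
from collections import OrderedDict

class GhostCache:
    """Records recently seen block keys for one insertion source; holds no data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = OrderedDict()
        self.hits = 0
        self.accesses = 0

    def record(self, key):
        """Note an access; return True if the key was already present (a ghost hit)."""
        self.accesses += 1
        hit = key in self.keys
        if hit:
            self.hits += 1
            self.keys.move_to_end(key)
        else:
            self.keys[key] = None
            if len(self.keys) > self.capacity:
                self.keys.popitem(last=False)
        return hit

    def merit(self):
        """One possible merit figure: the ghost hit rate for this source."""
        return self.hits / self.accesses if self.accesses else 0.0

def insertion_point(merit, metadata_length):
    """Map a merit figure in [0, 1] to a position in the cache metadata list:
    high merit -> near the tail (long residency), low merit -> near the head."""
    return int(merit * metadata_length)
```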
Another embodiment of a method for storing data in a cache is disclosed. The method comprises steps of creating at least one ghost cache recording hits from at least one source; calculating a merit figure associated with hits in the at least one ghost cache; segmenting the cache into a plurality of sections; and identifying an insertion point in metadata associated with one of the plurality of sections based on the merit figure.
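Under the same caveats, the segmented variant might be sketched as follows: each section is its own list, the merit figure selects the section into which a block is inserted, and overflow cascades toward the section closest to eviction. The particular segmentation and spill-down scheme here are assumptions made for illustration only:

```python
class SegmentedCache:
    """Toy segmented cache metadata: each segment is a list of block keys.

    Segment 0 is closest to eviction; the last segment is safest."""

    def __init__(self, num_segments, segment_capacity):
        self.segments = [[] for _ in range(num_segments)]
        self.segment_capacity = segment_capacity

    def insert(self, key, merit):
        # Hypothetical mapping: a merit figure in [0, 1] picks the target segment.
        target = min(int(merit * len(self.segments)), len(self.segments) - 1)
        self.segments[target].append(key)         # tail of the chosen segment
        self._evict_if_needed()

    def _evict_if_needed(self):
        # Overflowing segments spill their oldest entries toward segment 0,
        # and segment 0 finally evicts from its head.
        for i in range(len(self.segments) - 1, 0, -1):
            while len(self.segments[i]) > self.segment_capacity:
                self.segments[i - 1].append(self.segments[i].pop(0))
        while len(self.segments[0]) > self.segment_capacity:
            self.segments[0].pop(0)               # eviction from the cache
```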
A caching system is disclosed that comprises a cache, a cache controller connected to the cache, one or more sources operable to store data in the cache, and at least one ghost cache recording hits from at least one of the sources. The cache controller is configured to calculate a merit figure based on a hit rate in the ghost cache and determine an insertion point in metadata associated with the cache based on the merit figure.
The methods of the present invention include steps that may be performed by computer-executable instructions stored on a computer-readable medium and executed by a computer.
In comparison to known prior art, certain embodiments of the invention are capable of achieving certain aspects, including some or all of the following: data can be accessed faster by cache clients, including those in multi-client systems.
