Designing a cache using an LRU-LFU array

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

U.S. Classification: C711S129000, C711S132000, C711S136000
Type: Reexamination Certificate
Status: active
Patent number: 06748491


TECHNICAL FIELD
The present invention relates to the field of cache design, and more particularly to designing a cache using a Least Recently Used (LRU)-Least Frequently Used (LFU) array, thereby improving the performance of the cache.
BACKGROUND INFORMATION
A network server, e.g., a file server, database server, or web server, may be configured to receive requests from clients in a network system to read from or write to a disk, e.g., a disk drive, in the network server. These requests may form what is commonly referred to as a "workload" for the network server. That is, a workload may refer to the requests that need to be serviced by the network server.
Typically, a server in a network system comprises a disk adapter that bridges the disk, e.g., disk drive, to the processing unit of the server. A server may further comprise a cache, commonly referred to as a disk cache, within the disk adapter to increase the speed of accessing data. A cache is faster than a disk and thereby allows data to be read at higher speeds. Thus, if data is stored in the cache, it may be accessed at higher speeds than on the disk.
There have been many methods of designing disk caches that seek to increase the cache hit rate, thereby improving the performance of the disk cache. A "cache hit" is said to occur if an item, e.g., data, requested by the processor in the server or by a client in a network system is present in the disk cache. When an item requested by the processor in the server or by a client in the network system is not present in the cache, a "cache miss" is said to occur. A "cache hit rate" may refer to the rate at which cache hits occur. By improving the cache hit rate, the performance of the system may be improved, i.e., less data needs to be serviced from the disk.
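For concreteness, the cache hit rate can be computed as the number of cache hits divided by the total number of requests. The following Python sketch makes this explicit; the function name and the example figures are illustrative assumptions, not taken from the patent.

```python
# Hit rate: the fraction of requests satisfied from the cache rather
# than serviced from the disk. (Illustrative helper, not from the patent.)
def cache_hit_rate(hits: int, misses: int) -> float:
    total = hits + misses
    return hits / total if total > 0 else 0.0

# Example: 800 hits and 200 misses give a hit rate of 0.8, i.e., only
# 20% of the requests had to be serviced from the disk.
print(cache_hit_rate(800, 200))  # 0.8
```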
One method to improve the performance of a disk cache is commonly referred to as the Least Recently Used (LRU) replacement method, as illustrated in FIG. 1. The LRU replacement method uses a single stack 101 comprising a set of cache entries, where each cache entry stores particular data. As stated above, if an item, e.g., data, requested by the processor in the server or a client in a network system is present in the cache, a "cache hit" is said to occur. When a cache hit occurs, the cache entry comprising the requested information, e.g., data, moves to the first stack position, as illustrated in FIG. 1. As stated above, if an item requested by the processor in the server or a client in a network system is not present in the cache, a "cache miss" is said to occur. When a cache miss occurs, the requested item is retrieved from the disk and then stored in the first stack position, as illustrated in FIG. 1. When a new entry is inserted in stack 101, the cache entry in the last stack position of stack 101 is evicted. The information, e.g., data, may subsequently be discarded.
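As a rough illustration of the single-stack LRU method just described, the following Python sketch keeps the stack as an ordered dictionary; the class and method names, the capacity parameter, and the use of OrderedDict are assumptions made for illustration, not details from the patent.

```python
from collections import OrderedDict

# A minimal sketch of the single-stack LRU policy described above (FIG. 1).
# Front of the OrderedDict = last stack position (LRU); back = first (MRU).
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.stack = OrderedDict()

    def get(self, key):
        """On a cache hit, move the entry to the first stack position."""
        if key not in self.stack:
            return None  # cache miss; caller fetches from disk and calls put()
        self.stack.move_to_end(key)
        return self.stack[key]

    def put(self, key, value):
        """Store a new entry at the first stack position, evicting the
        entry at the last stack position if the cache is full."""
        if key in self.stack:
            self.stack.move_to_end(key)
        elif len(self.stack) >= self.capacity:
            self.stack.popitem(last=False)  # evict LRU entry; data is discarded
        self.stack[key] = value
```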
Another method to improve the performance of a disk cache is commonly referred to as the Segmented LRU (S-LRU) replacement method, as illustrated in FIG. 2. The S-LRU replacement method may use two stacks 201A-B. Each stack, stack 201A-B, may comprise a set of cache entries, where each cache entry stores particular data. When a cache hit occurs in the first stack, e.g., stack 201A, the cache entry comprising the requested information, e.g., data, moves up to the first stack position of the second stack, e.g., stack 201B, as illustrated in FIG. 2. When a cache hit occurs in the second stack, e.g., stack 201B, the cache entry comprising the requested information moves up to the first stack position of that same stack, as illustrated in FIG. 2. In either case, when a new entry is inserted in stack 201B, the cache entry at the last stack position of stack 201B is evicted to the first stack position of stack 201A. When a cache miss occurs, the requested item is retrieved from the disk and then stored in the first stack position of the first stack, e.g., stack 201A, as illustrated in FIG. 2. Whenever a new entry is inserted in stack 201A, the cache entry at the last stack position of stack 201A is evicted, and the information, e.g., data, may subsequently be discarded.
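A minimal sketch of the two-stack S-LRU behavior described above might look as follows; the stack capacities and all names are illustrative assumptions, and retrieving an item from disk on a miss is left to the caller.

```python
from collections import OrderedDict

# A sketch of the two-stack S-LRU policy described above (FIG. 2).
# Back of each OrderedDict = first stack position (MRU); front = last (LRU).
class SegmentedLRUCache:
    def __init__(self, cap_a: int, cap_b: int):
        self.cap_a, self.cap_b = cap_a, cap_b
        self.stack_a = OrderedDict()  # first stack (201A in the text)
        self.stack_b = OrderedDict()  # second stack (201B in the text)

    def _push_a(self, key, value):
        """Insert at A's first position; A's last entry, if evicted,
        is simply discarded."""
        if len(self.stack_a) >= self.cap_a:
            self.stack_a.popitem(last=False)
        self.stack_a[key] = value

    def _push_b(self, key, value):
        """Insert at B's first position; B's last entry, if evicted,
        moves to the first position of stack A."""
        if len(self.stack_b) >= self.cap_b:
            old_key, old_val = self.stack_b.popitem(last=False)
            self._push_a(old_key, old_val)
        self.stack_b[key] = value

    def get(self, key):
        if key in self.stack_b:      # hit in stack B: move to B's first position
            self.stack_b.move_to_end(key)
            return self.stack_b[key]
        if key in self.stack_a:      # hit in stack A: promote to stack B
            value = self.stack_a.pop(key)
            self._push_b(key, value)
            return value
        return None                  # miss; caller fetches from disk, then put()

    def put(self, key, value):
        """A cache miss stores the new entry at stack A's first position."""
        self._push_a(key, value)
```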
Unfortunately, these methods of cache design do not effectively configure a cache to handle workload requests efficiently. That is, because the cache is not designed based on an analysis of the workload, these methods do not use memory space efficiently and therefore do not improve the cache hit rate as much as they could.
It would therefore be desirable to develop a cache based on an analysis of the workload, thereby improving the performance of the cache, i.e., improving the cache hit rate, using a Least Recently Used (LRU)-Least Frequently Used (LFU) array.
SUMMARY
The problems outlined above may at least in part be solved in some embodiments by designing a Least Recently Used (LRU)-Least Frequently Used (LFU) cache array based on an analysis of the workload.
In one embodiment of the present invention, a method for designing a cache may comprise the step of a server in a network system, e.g., a file system or database system, receiving requests from one or more clients, e.g., requests to read from or write to a disk in the server. These requests may form a workload comprising the requests that need to be serviced by the server. A trace may be performed on the workload to provide information such as the frequency count for each Logical Block Address (LBA) referenced in the workload, i.e., the number of times each particular LBA was referenced. The trace may then be analyzed by grouping the LBAs with the same frequency count and determining the number of such groups counted in the trace. Upon analyzing the trace, an LRU-LFU cache may be designed based on the analysis. An LRU-LFU cache may comprise one or more stacks of cache entries, where the number of stacks corresponds to the number of frequency groups counted in the trace. Each particular stack may then have a length based on the number of LBAs with the frequency count associated with that particular stack. Stacks may be arranged in an array from most frequently used to least frequently used. That is, the stack associated with the highest frequency count may be located at the highest level of the array and the stack associated with the lowest frequency count may be located at the lowest level of the array. The cache entries in each particular stack may be arranged from most recently used to least recently used based on a logical time stamp associated with each particular cache entry. The logical time stamp may indicate the time the information, e.g., data, in the associated cache entry was requested. Upon the storing of a new cache entry in a particular stack, the cache entry located at the least recently used stack position may be evicted. When a cache entry is evicted, the information, e.g., data, associated with it may be discarded.
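The trace-analysis and stack-sizing step described above can be sketched as follows; the trace format (a flat list of referenced LBAs) and the function name are assumptions made for illustration, not details prescribed by the patent.

```python
from collections import Counter

# A simplified sketch of the trace-analysis step: count how often each
# LBA is referenced, group LBAs by frequency count, and size one LRU
# stack per frequency group.
def design_lru_lfu_array(trace):
    """Return a list of stack lengths, ordered from the most frequently
    used group (highest array level) to the least frequently used."""
    freq = Counter(trace)            # frequency count per LBA
    groups = Counter(freq.values())  # number of LBAs sharing each count
    return [groups[count] for count in sorted(groups, reverse=True)]

# Example: LBA 7 referenced 3 times, LBAs 1 and 4 twice, LBA 9 once,
# yielding three stacks of lengths [1, 2, 1].
print(design_lru_lfu_array([7, 1, 7, 4, 9, 1, 7, 4]))  # [1, 2, 1]
```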
In another embodiment of the present invention, an evicted cache entry may be stored at the most recently used stack position of the next higher level stack, except if the cache entry is located in the highest level stack of the cache array. In yet another embodiment, an evicted cache entry may be stored at the most recently used stack position of the next lower level stack, except if the cache entry is located in the lowest level stack of the cache array.
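The first eviction variant described above, in which an evicted entry moves to the most recently used position of the next higher level stack, might be sketched as follows; treating the stacks as a list of ordered dictionaries arranged from lowest to highest level is an assumption, and capacity handling in the receiving stack is omitted for brevity.

```python
from collections import OrderedDict

# Illustrative sketch of one eviction variant: the entry evicted from a
# stack's least recently used position is stored at the most recently
# used position of the next higher level stack, unless the entry is
# already in the highest level stack of the array.
def evict(stacks: list[OrderedDict], level: int) -> None:
    key, value = stacks[level].popitem(last=False)  # least recently used
    if level + 1 < len(stacks):         # not the highest level stack
        stacks[level + 1][key] = value  # most recently used position
    # Otherwise the evicted entry's data is simply discarded.
```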
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood.
