Designing a cache with adaptive reconfiguration

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

Classification: C711S113000, C711S132000, C711S122000
Type: Reexamination Certificate
Status: active
Patent number: 06745295

ABSTRACT:

TECHNICAL FIELD
The present invention relates to the field of cache design, and more particularly to designing a cache with adaptive reconfiguration, thereby improving the performance of the cache.
BACKGROUND INFORMATION
A network server, e.g., file server, database server, web server, may be configured to receive a stream of requests from clients in a network system to read from or write to a disk, e.g., disk drive, in the network server. These requests may form what is commonly referred to as a “workload” for the network server. That is, a workload may refer to the requests that need to be serviced by the network server.
Typically, a server in a network system comprises a disk adapter that bridges the disk, e.g., disk drive, to the processing unit of the server. A server may further comprise a cache, commonly referred to as a disk cache, within the disk adapter to increase the speed of accessing data. A cache is faster than a disk; thus, data stored in the cache may be accessed at higher speeds than data read from the disk.
There have been many methods of designing disk caches that seek to increase the cache hit rate, thereby improving the performance of the disk cache. A “cache hit” is said to occur if an item, e.g., data, requested by the processor in the server or a client in the network system is present in the disk cache. When such an item is not present in the cache, a “cache miss” is said to occur. The “cache hit rate” refers to the rate at which cache hits occur, i.e., the fraction of requests satisfied from the cache. By improving the cache hit rate, the performance of the cache may be improved, i.e., fewer requests need to be serviced from the disk.
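Stated as a formula, the hit rate is the fraction of requests satisfied from the cache rather than the disk; a minimal sketch (the function name is illustrative, not from the patent):

```python
def cache_hit_rate(hits: int, misses: int) -> float:
    """Fraction of requests serviced by the cache rather than the disk."""
    total = hits + misses
    return hits / total if total else 0.0
```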
One method to improve the performance of a disk cache is commonly referred to as the Least Recently Used (LRU) replacement method, illustrated in FIG. 1. The LRU replacement method uses a single stack 101 comprising a set of cache entries, where each cache entry stores particular data. As stated above, if an item, e.g., data, requested by the processor in the server or a client in the network system is present in the cache, a “cache hit” is said to occur. When a cache hit occurs, the cache entry comprising the requested information moves to the first stack position, as illustrated in FIG. 1. If the requested item is not present in the cache, a “cache miss” is said to occur; the requested item is then retrieved from the disk and stored in the first stack position, as illustrated in FIG. 1. When a new entry is inserted in stack 101, the cache entry in the last stack position of stack 101 is evicted, and its information, e.g., data, may subsequently be discarded.
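As a concrete illustration, here is a minimal Python sketch of the single-stack LRU behavior just described (the class, its parameters, and the fetch_from_disk callback are illustrative assumptions; the patent describes the policy only with reference to FIG. 1):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal sketch of a single-stack LRU policy: a hit moves the
    entry to the first stack position; a miss fetches from disk, inserts
    at the first position, and evicts from the last position."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = OrderedDict()  # front of the dict = first stack position

    def get(self, key, fetch_from_disk):
        if key in self.stack:                        # cache hit
            self.stack.move_to_end(key, last=False)  # move to first position
            return self.stack[key]
        data = fetch_from_disk(key)                  # cache miss: read the disk
        if len(self.stack) >= self.capacity:
            self.stack.popitem(last=True)            # evict last stack position
        self.stack[key] = data                       # appended at the end...
        self.stack.move_to_end(key, last=False)      # ...then moved to the front
        return data
```

The OrderedDict stands in for stack 101, keeping entries in recency order without a separate linked-list structure.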
Another method to improve the performance of a disk cache is commonly referred to as the Segmented LRU (S-LRU) replacement method, illustrated in FIG. 2. The S-LRU replacement method may use two stacks 201A-B. Each stack, 201A-B, may comprise a set of cache entries where each cache entry stores particular instructions and data. When a cache hit occurs in the first stack, e.g., stack 201A, the cache entry comprising the requested information, e.g., data, moves up to the first stack position of the second stack, e.g., stack 201B, as illustrated in FIG. 2. When a new entry is added to stack 201B, the cache entry at the last stack position of stack 201B is evicted to the first stack position of stack 201A; when a new entry is inserted in stack 201A, the cache entry at the last stack position of stack 201A is evicted, and its information, e.g., data, may subsequently be discarded. When a cache hit occurs in the second stack, e.g., stack 201B, the cache entry comprising the requested information moves up to the first stack position of that same stack, as illustrated in FIG. 2, with any resulting evictions cascading from stack 201B to stack 201A as described above. When a cache miss occurs, the requested item is retrieved from the disk and then stored in the first stack position of the first stack, e.g., stack 201A, as illustrated in FIG. 2; the cache entry at the last stack position of stack 201A is evicted, and its information, e.g., data, may subsequently be discarded.
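Similarly, a hypothetical sketch of the two-stack S-LRU flow, with stack_a standing in for stack 201A and stack_b for stack 201B; the fixed segment sizes mirror the static design criticized below, and all names, including the fetch_from_disk callback, are illustrative assumptions:

```python
from collections import OrderedDict

class SegmentedLRUCache:
    """Minimal two-stack S-LRU sketch: stack_a is the first stack
    (201A in the text) and stack_b the second stack (201B)."""

    def __init__(self, size_a, size_b):
        self.size_a, self.size_b = size_a, size_b
        self.stack_a = OrderedDict()  # front = first stack position
        self.stack_b = OrderedDict()

    def _push_front(self, stack, size, key, value):
        # Insert at the first stack position; if the stack is full,
        # push the entry out of the last position and return it.
        evicted = None
        if len(stack) >= size:
            evicted = stack.popitem(last=True)
        stack[key] = value
        stack.move_to_end(key, last=False)
        return evicted  # (key, value) from the last position, or None

    def get(self, key, fetch_from_disk):
        if key in self.stack_b:               # hit in the second stack
            self.stack_b.move_to_end(key, last=False)
            return self.stack_b[key]
        if key in self.stack_a:               # hit in the first stack
            value = self.stack_a.pop(key)
            demoted = self._push_front(self.stack_b, self.size_b, key, value)
            if demoted is not None:           # tail of 201B moves to front of 201A
                self._push_front(self.stack_a, self.size_a, *demoted)
            return value
        value = fetch_from_disk(key)          # miss: insert at front of 201A
        self._push_front(self.stack_a, self.size_a, key, value)
        return value
```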
Unfortunately, these methods of cache design focus on static techniques instead of adaptive techniques. For example, the lengths of the stacks in these caches do not adapt, i.e., change in size, in response to changes in the request stream. Consequently, these methods do not use memory space efficiently. If the memory space were used efficiently, the cache hit rate could be improved.
It would therefore be desirable to develop a cache based on adaptive techniques, thereby improving the performance of the cache, i.e., improving the cache hit rate.
SUMMARY
The problems outlined above may at least in part be solved in some embodiments by designing a cache array that is reconfigurable based on tracking changes in the request stream, i.e., the workload.
In one embodiment of the present invention, a method for reconfiguring a cache may comprise the step of creating a cache array with one or more stacks of cache entries based on a workload. Each stack may be associated with a particular frequency count; that is, each cache entry in that stack has a frequency count of at least the frequency count associated with the stack. A frequency count may indicate the number of times the information, e.g., data, in the associated cache entry was requested. The stacks in the cache array may then be ordered from most frequently used to least frequently used based on their associated frequency counts. The cache entries in each stack may be ordered from most recently used to least recently used based on a logical time stamp indicating the time the information, e.g., data, associated with the cache entry was last requested.
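A hypothetical Python sketch of this layout (all names are illustrative; grouping entries by exact frequency count is a simplification of the “at least” relationship described above):

```python
from dataclasses import dataclass, field

@dataclass
class CacheEntry:
    key: str
    data: bytes
    frequency_count: int  # number of times this entry's data was requested
    timestamp: int        # logical time of the most recent request

@dataclass
class Stack:
    min_frequency: int                           # frequency count for this stack
    entries: list = field(default_factory=list)  # index 0 = most recently used

def build_cache_array(entries):
    """Group entries into stacks by frequency count, order the stacks from
    most to least frequently used, and order the entries within each stack
    from most to least recently used by logical timestamp."""
    stacks = {}
    for e in entries:
        stacks.setdefault(e.frequency_count,
                          Stack(e.frequency_count)).entries.append(e)
    for s in stacks.values():
        s.entries.sort(key=lambda e: e.timestamp, reverse=True)  # MRU first
    return sorted(stacks.values(), key=lambda s: s.min_frequency, reverse=True)
```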
A workload is not static but dynamic; it changes over time. As the workload changes, the cache may be reconfigured by tracking those changes. If an item requested in the stream of new requests is present in a particular cache entry, a “cache hit” is said to occur. When a cache hit occurs, the frequency count associated with the requested cache entry is updated, i.e., increased by one, in the cache directory associated with that cache entry. A determination may then be made as to whether the updated frequency count has increased to match the frequency count associated with the next higher level stack. If so, that cache entry may be stored in the most recently used stack position in the next higher level stack. Upon storing the cache entry there, the next higher level stack expands in size by one entry, and the next lower level stack, which the entry left, is reduced in size by one entry.
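Building on the sketch above, the hit path might look like the following; the promotion test and the grow/shrink bookkeeping follow this paragraph directly, while the function signature is an assumption:

```python
def on_cache_hit(cache_array, stack_idx, entry, logical_time):
    """Hypothetical hit path: bump the entry's frequency count and, if it
    now reaches the frequency count of the next higher level stack, move
    the entry to that stack's most recently used position. The higher
    stack grows by one entry; the lower stack shrinks by one entry."""
    entry.frequency_count += 1
    entry.timestamp = logical_time
    if stack_idx > 0:  # cache_array[0] is the most frequently used stack
        higher = cache_array[stack_idx - 1]
        if entry.frequency_count >= higher.min_frequency:
            cache_array[stack_idx].entries.remove(entry)  # lower stack shrinks
            higher.entries.insert(0, entry)               # MRU position; grows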
If an item requested in the stream of new requests is not present in any cache entry, a “cache miss” is said to occur. A cache array may be reconfigured when a cache miss occurs by tracking the number of cache hits in one or more particular stack positions in each particular stack of the cache array during a particular duration of time.
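The passage breaks off here, but the tracking it describes could be kept per stack position, as in this illustrative sketch (the class name, method names, and interval-reset policy are assumptions, not from the patent):

```python
from collections import defaultdict

class HitTracker:
    """Illustrative sketch: count cache hits per (stack, stack position)
    over an interval so that, on a miss, the reconfiguration logic can
    favor the stacks and positions that are actually earning hits."""

    def __init__(self):
        self.hits = defaultdict(int)  # (stack index, stack position) -> count

    def record_hit(self, stack_idx, position):
        self.hits[(stack_idx, position)] += 1

    def hits_in_stack(self, stack_idx):
        return sum(n for (s, _), n in self.hits.items() if s == stack_idx)

    def reset(self):
        """Start a new tracking interval (the interval policy is assumed)."""
        self.hits.clear()
```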
