Systems and methods for managing storage location descriptors
Reexamination Certificate
2002-06-27
2004-03-02
Perreen, Rehana (Department: 2182)
Electrical computers and digital data processing systems: input/output
Input/output data processing
Input/output access regulation
C710S054000, C710S055000, C711S145000, C711S208000
Reexamination Certificate
active
06701393
ABSTRACT:
BACKGROUND OF THE INVENTION
A typical data storage system includes a controller, an input/output (I/O) cache and a set of disk drives. The I/O cache temporarily stores data received from an external host for subsequent storage in the set of disk drives, and temporarily stores data read from the set of disk drives for subsequent transmission to the external host. In order to efficiently coordinate the use of space within the I/O cache, the controller manages descriptors which identify and describe the status of respective memory blocks (e.g., 512-byte segments) of the I/O cache.
Some conventional approaches to managing descriptors involve the use of a memory construct called a Least-Recently-Used (LRU) queue. In one conventional approach (hereinafter referred to as the single-queue approach), each descriptor (i) is an entry of an LRU queue, and (ii) resides at a location within the LRU queue based on when the memory block identified by that descriptor (i.e., by that LRU entry) was accessed (e.g., by a lookup operation) relative to the blocks identified by the other descriptors (i.e., by the other LRU entries). In particular, the descriptor at the tail (or beginning) of the LRU queue identifies the most recently accessed block of the I/O cache, the next descriptor identifies the next most recently accessed block, and so on. Accordingly, the descriptor at the head (or end) of the LRU queue identifies the least recently used block of the I/O cache.
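By way of illustration, the single-queue arrangement can be modeled with the following minimal C sketch (the type and field names are hypothetical; neither conventional approach prescribes a particular implementation):

    /* One descriptor per memory block (e.g., 512-byte segment) of the I/O cache. */
    struct descriptor {
        struct descriptor *prev;  /* neighbor toward the head (least recently used end) */
        struct descriptor *next;  /* neighbor toward the tail (most recently used end)  */
        unsigned int block;       /* index of the I/O cache block this descriptor describes */
        unsigned int hits;        /* cache hit count; used by the multi-queue approach below */
    };

    /* An LRU queue: head holds the least recently used descriptor,
       tail the most recently used one. */
    struct lru_queue {
        struct descriptor *head;
        struct descriptor *tail;
    };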
During operation, the controller reuses descriptors from the head of the LRU queue in response to cache miss operations. In particular, when the controller needs to move non-cached data into the I/O cache due to a cache miss, the controller (i) moves the non-cached data into the memory block of the I/O cache identified by the descriptor at the head of the LRU queue (i.e., the least recently used block of the I/O cache), and (ii) moves the descriptor from the head to the tail of the LRU queue to indicate that the identified block is now the most recently used block of the I/O cache.
In response to a cache hit, the data already resides in a block of the I/O cache. Accordingly, the controller simply moves the descriptor identifying that block from its current location within the LRU queue (e.g., perhaps in the middle of the LRU queue) to the tail of the LRU queue to indicate that the identified block is now the most recently used block of the I/O cache.
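Given such structures, the miss and hit handling just described reduces to an unlink followed by an append at the tail. A minimal sketch, reusing the structures above (the helper names are again hypothetical):

    #include <stddef.h>  /* NULL */

    /* Detach a descriptor from wherever it sits in a queue. */
    static void q_unlink(struct lru_queue *q, struct descriptor *d) {
        if (d->prev) d->prev->next = d->next; else q->head = d->next;
        if (d->next) d->next->prev = d->prev; else q->tail = d->prev;
        d->prev = d->next = NULL;
    }

    /* Append a descriptor at the tail (most recently used end). */
    static void q_push_tail(struct lru_queue *q, struct descriptor *d) {
        d->prev = q->tail;
        d->next = NULL;
        if (q->tail) q->tail->next = d; else q->head = d;
        q->tail = d;
    }

    /* Cache miss: recycle the least recently used block (assumes a
       non-empty queue), then mark it most recently used. */
    static struct descriptor *on_miss(struct lru_queue *q) {
        struct descriptor *victim = q->head;
        q_unlink(q, victim);
        /* ... the non-cached data is copied into victim->block here ... */
        q_push_tail(q, victim);
        return victim;
    }

    /* Cache hit: the data is already cached; just mark its block MRU. */
    static void on_hit(struct lru_queue *q, struct descriptor *d) {
        q_unlink(q, d);
        q_push_tail(q, d);
    }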
Another conventional approach to managing descriptors uses multiple LRU queues. In this approach (hereinafter referred to as the multi-queue approach), each descriptor (i) identifies a memory block of the I/O cache, and (ii) includes a cache hit field which stores the absolute number of cache hits which have occurred on that block. A first LRU queue includes descriptors to I/O cache blocks having a minimal number of hits (e.g., one or two cache hits). Other queues include descriptors to I/O cache blocks having higher numbers of hits.
During operation, the controller responds to cache misses by (i) pulling descriptors from the head of the first LRU queue to identify the least recently used blocks of the I/O cache for caching new data, (ii) updating the contents of the cache hit fields of those descriptors, and (iii) placing the descriptors at the tail of the first LRU queue. In response to cache hits on I/O cache blocks, the controller updates the contents of the cache hit fields of the descriptors identifying those blocks and moves those descriptors to the tails of the LRU queues based on the results of a queue priority function. Further details of how the multi-queue approach works will now be provided with reference to the following example.
Suppose that a particular multi-queue approach uses four LRU queues which are numbered “0”, “1”, “2” and “3” to correspond to results of a queue priority function as will now be explained in further detail. In response to a cache miss operation, the controller (i) pulls a descriptor from the head of the first LRU queue, (ii) writes the non-cached data to the block identified by that descriptor, (iii) initializes the contents of a cache hit field of that descriptor to “1”, and (iv) pushes that descriptor onto the tail of the first LRU queue. Since that descriptor is no longer at the head of the first LRU queue, that descriptor no longer identifies the least recently used block of the I/O cache.
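In code, the miss path of this four-queue example might look like the following sketch (reusing the hypothetical helpers from the single-queue sketch above; queues[0] is the first LRU queue):

    struct lru_queue queues[4];  /* LRU queues numbered "0" through "3" */

    /* Cache miss: recycle the least recently used descriptor of the first queue. */
    static struct descriptor *mq_on_miss(void) {
        struct descriptor *victim = queues[0].head;   /* (i) pull from the head   */
        q_unlink(&queues[0], victim);
        /* (ii) ... write the non-cached data to victim->block ...                */
        victim->hits = 1;                             /* (iii) initialize count   */
        q_push_tail(&queues[0], victim);              /* (iv) push onto the tail  */
        return victim;
    }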
After the passage of time and/or the occurrence of other I/O cache operations, the location of that descriptor within the first LRU queue may shift (e.g., that descriptor may migrate to the middle of the first LRU queue due to other descriptors being added to the tail of the first LRU queue in response to caching operations). In response to a subsequent cache hit on the block identified by that descriptor, the controller (i) increments the contents of the cache hit field of that descriptor, (ii) performs a queue priority function on the incremented contents to provide a queue priority function result, and (iii) moves that descriptor to a new location based on the queue priority function result. For example, suppose that the contents of the cache hit field of that descriptor is still “1” and that the queue priority function is log₂(“contents of the cache hit field”). In response to a cache hit on the block identified by that descriptor, the controller increments the contents of the cache hit field from “1” to “2” (indicating that one additional cache hit has occurred on the block identified by that descriptor), generates a queue priority function result (e.g., log₂(2) is “1”), and moves the descriptor to a new location within the multiple queues (e.g., from the middle of the first LRU queue to the tail of the second LRU queue) based on that result.
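Interpreted as the integer part of the base-2 logarithm, the queue priority function maps hit counts of 1, 2-3, 4-7, and 8 and above to queues “0”, “1”, “2” and “3” respectively. A minimal sketch under that assumption:

    /* Queue priority function: floor(log2(hits)), clamped to the
       highest queue so that large counts stay in queue "3". */
    static unsigned int queue_priority(unsigned int hits) {
        unsigned int q = 0;
        while (hits >>= 1)  /* count how many times hits halves before reaching zero */
            q++;
        return q < 3 ? q : 3;
    }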
It should be understood that, over time, the contents of the cache hit fields of the descriptors can increase to the point at which the queue priority function results direct the controller to move the descriptors to the tails of LRU queues other than the first LRU queue. For instance, if the incremented contents of a descriptor equals two, the result of the queue priority function is “1” (e.g., log₂(2) is “1”), and the controller moves that descriptor from the first LRU queue to the second LRU queue. Similarly, while a descriptor resides in the second LRU queue, if the number of cache hits reaches the next log₂ barrier (i.e., four), the controller moves that descriptor from the second LRU queue to a third LRU queue, and so on. Accordingly, in the multi-queue approach, the controller is configured to promote a descriptor from each LRU queue to the adjacent higher-level LRU queue based on increases in the number of hits on the block identified by that descriptor.
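The hit path then follows directly: increment the count, recompute the queue number, and move the descriptor to the tail of the queue the result names. A sketch under the same assumptions (a real implementation would record which queue each descriptor occupies; here the caller supplies it):

    /* Cache hit: bump the hit count and requeue the descriptor. When the
       count crosses a log2 barrier (2, 4, 8, ...), the descriptor is
       promoted to the adjacent higher-level queue. */
    static void mq_on_hit(struct lru_queue *queues, struct descriptor *d,
                          unsigned int current_q) {
        d->hits++;
        unsigned int new_q = queue_priority(d->hits);
        q_unlink(&queues[current_q], d);
        q_push_tail(&queues[new_q], d);
    }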
It should be understood that the controller is also configured to demote descriptors from each LRU queue to the adjacent lower-level LRU queue when a descriptor reaches the head of its LRU queue and a lifetime timer expires. For example, when a descriptor reaches the head of the third LRU queue, the controller demotes that descriptor to the tail of the next lowest LRU queue, i.e., the tail of the second LRU queue. Similarly, when a descriptor reaches the head of the second LRU queue, the controller demotes that descriptor to the tail of the first LRU queue. Finally, as mentioned earlier, the controller reuses the descriptors at the head of the first LRU queue, which identify the least recently used blocks of the I/O cache, in response to cache misses.
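Demotion is the mirror image of promotion. A sketch under the same assumptions, with the lifetime timer abstracted behind a hypothetical predicate lifetime_expired():

    int lifetime_expired(const struct descriptor *d);  /* hypothetical timer check */

    /* Demote the descriptor at the head of queue n (n > 0) whose lifetime
       timer has expired to the tail of queue n - 1. */
    static void mq_demote(struct lru_queue *queues, unsigned int n) {
        struct descriptor *d = queues[n].head;
        if (n == 0 || d == NULL || !lifetime_expired(d))
            return;
        q_unlink(&queues[n], d);
        q_push_tail(&queues[n - 1], d);
    }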
In both the single-queue and multi-queue approaches, the descriptors within the LRU queues are typically arranged as doubly-linked lists. That is, each descriptor includes a forward pointer which points to the adjacent preceding descriptor in an LRU queue, and a reverse pointer which points to the adjacent succeeding descriptor in the LRU queue. When the controller moves a descriptor from the middle of an LRU queue to the tail of the same LRU queue or of a new LRU queue, the controller performs multiple linked list operations. These linked list operations will now be described.
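To make the cost of such a move concrete, the annotated sketch below (using the same hypothetical helpers as above) counts the pointer writes performed when a descriptor is moved from the middle of one queue to the tail of the same or another queue:

    /* Worst case for a mid-queue descriptor:
         unlink:  d->prev->next = d->next;   (1)
                  d->next->prev = d->prev;   (2)
         append:  d->prev = dst->tail;       (3)
                  d->next = NULL;            (4)
                  dst->tail->next = d;       (5)
                  dst->tail = d;             (6)
       plus head/tail fixups when d sits at an end of the source queue. */
    static void move_to_tail(struct lru_queue *src, struct lru_queue *dst,
                             struct descriptor *d) {
        q_unlink(src, d);
        q_push_tail(dst, d);
    }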
Kemeny, John
Qui, Naizhong
Shen, Xueying
Chapin & Huang, L.L.C.
EMC Corporation
Huang, David E., Esq.
Mai, Rijue
Perreen, Rehana