Patent number: 06185659 (Reexamination Certificate, active)
Filed: 1999-03-23
Granted: 2001-02-06
Examiner: Nguyen, Hiep T. (Department: 2759)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
U.S. classes: C711S113000, C711S118000, C709S241000, C710S034000, C710S039000
TECHNICAL FIELD
The present invention relates to the field of caching memory devices and, more particularly, to methods of controlling data track prestaging based upon resource availability.
BACKGROUND ART
The software controlling a cached disk array system is often unaware of how much work the system is being asked to perform, and it is static in determining how much of the system's resources to devote to completing that work. Data is moved between the disk arrays and the cache memory by fixed algorithms that do not take the workload into account. Consequently, disk array systems do not always use the available cache memory, back-end disk bandwidth, or disk controller processor cycles (throughput) to the fullest extent possible.
Static algorithms allocate system resources broadly to allow many threads of work to operate simultaneously. Each executing thread is given part of the cache memory with which to work. When a thread issues a request to access data, if that data is currently buffered in the cache memory (a cache hit), it is provided quickly and the thread continues with its work. If the data is not available in the cache memory (a cache miss), then a disk controller must take the request and retrieve the data from a disk drive. Accessing a disk drive consumes controller throughput and the back-end bandwidth of the disk drive array, and it takes considerably more time than accessing the same data from the cache memory.
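To make the two paths concrete, the following is a minimal sketch of the hit/miss distinction just described. The fixed-size table, the function names (cache_lookup, read_from_disk), and the trivial placement policy are illustrative assumptions, not structures taken from the patent.

```c
/* Minimal sketch of the cache hit/miss path; all names and the
 * placement policy are illustrative, not taken from the patent. */
#include <stdio.h>
#include <string.h>

#define CACHE_SLOTS 8

static long cached_track[CACHE_SLOTS];   /* track IDs in cache; -1 = empty */

/* Returns 1 on a cache hit, 0 on a miss. */
static int cache_lookup(long track)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cached_track[i] == track)
            return 1;
    return 0;
}

/* Servicing a miss consumes controller throughput and back-end
 * bandwidth, and is far slower than a cache reference. */
static void read_from_disk(long track)
{
    printf("miss: controller fetches track %ld from the disk array\n", track);
    cached_track[track % CACHE_SLOTS] = track;  /* buffer for future hits */
}

static void access_track(long track)
{
    if (cache_lookup(track))
        printf("hit: track %ld served from cache\n", track);
    else
        read_from_disk(track);
}

int main(void)
{
    memset(cached_track, -1, sizeof cached_track);  /* all slots empty */
    access_track(42);   /* first access misses and stages the track */
    access_track(42);   /* second access hits */
    return 0;
}
```

In a real array the lookup would be a directory or hash search and the replacement policy far more elaborate; the point is only the asymmetry between the two paths.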
Under light workload conditions, the performance of the disk array system, as seen by the threads requesting access to the data, is governed mainly by the ratio of cache hits to cache misses. A thread that experiences a cache miss is delayed only briefly, as there is little competition for the drive controller throughput or disk array bandwidth needed to retrieve the data. Competition for throughput and bandwidth increases as the workload increases. Under heavy workloads, the average access time becomes limited by either the drive controllers' ability to service cache misses or the disk array's bandwidth.
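A back-of-the-envelope model makes the dependence on hit ratio concrete. Assuming illustrative timings of 10 microseconds for a cache hit and 10 milliseconds for a disk access (figures chosen for illustration, not taken from the patent), the average access time falls sharply as the hit ratio rises:

```c
/* Toy model: average access time as a function of cache hit ratio.
 * The timing constants are assumptions, not figures from the patent. */
#include <stdio.h>

int main(void)
{
    const double t_hit_us  = 10.0;      /* assumed cache access time */
    const double t_miss_us = 10000.0;   /* assumed disk access time  */
    const double ratios[]  = { 0.80, 0.90, 0.95, 0.99 };

    for (int i = 0; i < 4; i++) {
        double h     = ratios[i];
        double t_avg = h * t_hit_us + (1.0 - h) * t_miss_us;
        printf("hit ratio %.2f -> average access time %7.1f us\n", h, t_avg);
    }
    return 0;
}
```

Under these assumed timings, raising the hit ratio from 0.80 to 0.99 cuts the modeled average access time from roughly 2008 to 110 microseconds, which is why devoting idle resources to raising the hit ratio pays off.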
Disk array system performance can be improved under all workload conditions by increasing the size of the cache memory, increasing drive controller throughput, adding more drive controllers, or increasing the back-end bandwidth of the disk array itself. Each of these improvements requires faster or additional hardware, which translates into increased cost and power consumption.
Changes to the controlling software cannot increase the speed or capacity of the hardware resources, but they can improve performance by using those resources more efficiently. The best way to improve a caching disk system's performance is to improve its cache utilization, since a cache hit is a significant performance improvement over a cache miss. By adjusting the algorithms, underutilized resources can be reallocated to allow more data tracks to be prestaged from the disk array into the cache memory. More data in the cache memory increases the probability of a cache hit and thus improves overall performance.
DISCLOSURE OF INVENTION
The present invention is a memory system, and a method for controlling the prestaging of data tracks into cache memory based upon the availability of resources within the memory system. The memory system comprises a cache memory, a resource controller, a shared memory, one or more memory devices, and one or more memory controllers. Prestage hints from an external host are provided to the resource controller, which generates and stores prestage requests in the shared memory. The contents of the shared memory are also available to the memory controllers. When the resource controller determines that there is sufficient cache memory and sufficient memory device back-end bandwidth available to prestage at least one data track, it broadcasts a message to all of the memory controllers. Memory controllers not utilizing all of their throughput may accept prestage requests from the shared memory and then copy the associated data tracks from the memory devices to the cache memory. Counters are maintained in the shared memory to track the number of prestage requests in the process of being serviced and the number of prestaged data tracks already buffered in cache memory and waiting to be accessed by the host.
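The sketch below condenses that flow into a single process for illustration. The data structures, function names, and queue discipline are assumptions; the patent text specifies only that prestage requests and the two counters live in shared memory and that the resource controller broadcasts to all memory controllers.

```c
/* Sketch of the prestage flow, collapsed into one process; all names
 * and thresholds are illustrative assumptions, not the patent's. */
#include <stdio.h>

#define MAX_REQUESTS    16
#define NUM_CONTROLLERS  3

struct shared_memory {
    long pending[MAX_REQUESTS];  /* prestage requests built from host hints */
    int  head, tail;             /* simple FIFO over the request slots      */
    int  in_service;             /* counter: requests being serviced        */
    int  staged_waiting;         /* counter: tracks buffered, not yet read  */
};

static struct shared_memory shm;

/* Resource controller turns a host prestage hint into a request. */
static void post_prestage_request(long track)
{
    shm.pending[shm.tail++ % MAX_REQUESTS] = track;
}

/* A memory controller with spare throughput accepts one request and
 * copies the track from the memory devices into the cache. */
static void controller_on_broadcast(int id, int busy)
{
    if (busy || shm.head == shm.tail)
        return;                       /* no spare throughput, or no work */
    long track = shm.pending[shm.head++ % MAX_REQUESTS];
    shm.in_service++;                 /* request now being serviced      */
    printf("controller %d prestages track %ld into cache\n", id, track);
    shm.in_service--;                 /* copy complete                   */
    shm.staged_waiting++;             /* buffered, awaiting host access  */
}

/* Resource controller broadcasts only when cache space and back-end
 * bandwidth can absorb at least one more track. */
static void resource_controller(int free_cache_tracks, int free_bw_tracks)
{
    if (free_cache_tracks < 1 || free_bw_tracks < 1)
        return;
    for (int id = 0; id < NUM_CONTROLLERS; id++)
        controller_on_broadcast(id, /* busy= */ id == 1);
}

int main(void)
{
    post_prestage_request(7);     /* hints arriving from the host */
    post_prestage_request(8);
    resource_controller(4, 2);    /* both resources free -> broadcast */
    printf("tracks staged and waiting: %d\n", shm.staged_waiting);
    return 0;
}
```

Broadcasting, rather than assigning work to a specific controller, lets whichever controllers happen to have spare throughput self-select, which is how the description has idle capacity absorbed.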
This system and method provide improved performance during simple benchmark testing and during periods of low workload. The improved performance is achieved by increasing the use of cache memory, memory device back-end bandwidth, and memory controller throughput to increase the probability of a cache hit.
Accordingly, it is an object of the present invention to provide a memory system that has at least one memory device, at least one memory controller, cache memory and a resource controller. The resource controller determines when there is sufficient unused cache memory and/or unused memory device bandwidth available to service at least one prestage request. When either or both resources are available, the resource controller broadcasts a message to all of the memory controllers to service the prestage requests. Each memory controller with available throughput accepts one prestage request and then copies the associated data track from the memory devices to the cache memory.
Another object of the present invention is to provide a method for controlling prestaging requests in a caching memory system. The method involves calculating the available capacity of the cache memory and calculating the available bandwidth of the memory devices. When the calculated available capacity and/or bandwidth allow for at least one data track to be prestaged, a message is broadcast to all of the memory controllers in the system. When each memory controller receives the broadcast message, it determines its available throughput. When the throughput is sufficient to service at least one prestage request, the memory controller accepts one prestage request and then copies the associated data track from the memory devices to the cache memory.
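As a rough illustration of those availability calculations, the sketch below computes free cache capacity net of tracks in flight and tracks already staged, plus the remaining back-end bandwidth. The track size and all numeric figures are assumptions, and this sketch requires both resources to be free before broadcasting, whereas the text leaves the either/both combination open.

```c
/* Sketch of the availability tests named in the method above. The
 * counters mirror those kept in shared memory; the track size and
 * every numeric figure are assumptions for illustration only. */
#include <stdio.h>

#define TRACK_BYTES (256L * 1024)

/* Cache capacity still free once tracks in flight and tracks already
 * staged (but not yet read by the host) are accounted for. */
static long available_cache(long total, long used,
                            int in_service, int staged_waiting)
{
    return total - used - (in_service + staged_waiting) * TRACK_BYTES;
}

/* Back-end bandwidth left over, expressed in tracks per second. */
static double available_bandwidth(double total_tps, double busy_tps)
{
    return total_tps - busy_tps;
}

int main(void)
{
    long   cache = available_cache(64L * 1024 * 1024,   /* total cache */
                                   48L * 1024 * 1024,   /* in use      */
                                   2, 5);               /* counters    */
    double bw    = available_bandwidth(400.0, 395.5);

    /* Broadcast only when both resources can absorb one more track. */
    if (cache >= TRACK_BYTES && bw >= 1.0)
        printf("broadcast: %ld B cache and %.1f tracks/s free\n", cache, bw);
    else
        printf("hold: insufficient cache or bandwidth\n");
    return 0;
}
```

Charging the free-capacity figure for tracks that are merely in flight prevents the resource controller from over-committing the cache before earlier prestages complete.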
These and other objects, features and advantages will be readily apparent upon consideration of the following detailed description in conjunction with the accompanying drawings.
REFERENCES:
patent: 5737747 (1998-04-01), Vishlitzky et al.
patent: 5826107 (1998-10-01), Cline et al.
patent: 6023706 (2000-02-01), Schmuck et al.
patent: 6098064 (2000-08-01), Pirolli et al.
Inventors: Milillo, Michael Steven; West, Christopher J.
Attorney, Agent, or Firm: Brooks & Kushman P.C.
Primary Examiner: Nguyen, Hiep T.
Assignee: Storage Technology Corporation