Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition
Type: Reexamination Certificate
Filed: 1999-01-27
Issued: 2001-03-27
Examiner: Yoo, Do (Department: 2185)
US classes: C711S112000, C360S071000, C360S072100, C360S072300
Status: active
Patent number: 06209058
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates generally to cache systems, and, more particularly, to a cache manager for transferring data between a data disk and a cache buffer.
BACKGROUND
A cache buffer is a high-speed memory buffer inserted between a host system and a storage device, such as a disk drive, to store those portions of the disk drive data currently in use by the host. Since the cache is several times faster than the disk drive, it can reduce the effective disk drive access time. A typical disk drive includes a data disk having a plurality of concentric data tracks thereon, a spindle motor for rotating the data disk, and a transducer supported by an actuator-controlled carrier for positioning the transducer over the data tracks.
A firmware cache manager controls transfer of data from the disk drive into the cache buffer, and manages the data stored in the cache buffer. A typical cache manager utilizes a cache directory containing data block memory addresses, and control bits for cache management and access control. The cache manager searches the cache directory to fetch and store data blocks in the cache buffer, and uses a replacement strategy to determine which data blocks to retain in the cache buffer and which to discard.
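For illustration only, a minimal C sketch of such a cache directory follows; the structure layout, the field names (lba, buf_offset, valid, locked, last_used) and the linear lookup are assumptions made for the example, not details taken from the patent.

```c
#include <stdint.h>
#include <stddef.h>

/* One cache directory entry: maps a disk block (LBA) to a location in the
   cache buffer, plus control bits used for management and access control. */
struct cache_dir_entry {
    uint32_t lba;         /* logical block address of the cached block     */
    uint32_t buf_offset;  /* offset of the block inside the cache buffer   */
    uint8_t  valid;       /* entry holds usable data                       */
    uint8_t  locked;      /* access-control bit: block currently in use    */
    uint32_t last_used;   /* timestamp for an LRU-style replacement rule   */
};

/* Linear search of the directory for a requested block; returns the entry
   on a hit, or NULL on a miss (the manager must then fetch from disk). */
struct cache_dir_entry *dir_lookup(struct cache_dir_entry *dir, size_t n,
                                   uint32_t lba)
{
    for (size_t i = 0; i < n; i++)
        if (dir[i].valid && dir[i].lba == lba)
            return &dir[i];
    return NULL;
}
```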
In response to a data read request from a host, the cache manager directs the actuator to position the transducer over a selected data track containing the requested data. However, reading data is delayed until the portion of the selected track containing the requested data rotates under the transducer. This delay degrades cache performance and increases data transfer response time.
In order to increase the hit ratio in the cache buffer, typical cache managers utilize a read-ahead strategy in retrieving the requested data from the selected track. The cache manager defines a data segment on the selected track, including a fetch area containing the requested data followed by a post-fetch data area. The cache manager first reads the requested data from the fetch area and then continues reading ahead to the end of the post-fetch area unless interrupted by another data transfer request.
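The read-ahead behavior described above can be sketched roughly as follows; the sector-based segment layout and the helper routines read_sector_to_cache and new_request_pending are hypothetical stand-ins for whatever the drive firmware actually provides.

```c
#include <stdbool.h>
#include <stdint.h>

/* A read-ahead data segment on the selected track: the fetch area holds the
   requested blocks, the post-fetch area is speculative read-ahead. */
struct data_segment {
    uint32_t fetch_start;    /* first requested sector             */
    uint32_t fetch_end;      /* last requested sector (inclusive)  */
    uint32_t postfetch_end;  /* last sector of the post-fetch area */
};

/* hypothetical helpers assumed to exist in the drive firmware */
extern void read_sector_to_cache(uint32_t sector);
extern bool new_request_pending(void);

/* Read the fetch area, then keep reading ahead into the post-fetch area
   until its end, stopping early if another transfer request arrives. */
static void read_ahead(const struct data_segment *seg)
{
    uint32_t s;
    for (s = seg->fetch_start; s <= seg->fetch_end; s++)
        read_sector_to_cache(s);            /* requested data first */
    for (; s <= seg->postfetch_end; s++) {
        if (new_request_pending())          /* interrupt read-ahead */
            break;
        read_sector_to_cache(s);
    }
}
```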
To store the retrieved data into the cache buffer, the cache manager allocates and trims a cache segment in the cache buffer, comparable in size to that of the data segment on the selected track. However, in doing so, the cache manager effectively discards all data in the allocated cache segment before reading any data from the selected track. Such an allocation and trimming method drastically reduces the hit ratio of the cache system and results in performance degradation. Since reading data from the post-fetch data area of the data segment must be interrupted to service any subsequent data transfer request, in many instances, only a portion of the data in the post-fetch area is retrieved and stored in a corresponding portion of the cache segment. As such, only a portion of the data in the cache segment is actually overwritten, and the pre-existing data in the remaining portion of the cache segment need not have been discarded. Any future reference to the pre-existing data results in a cache miss, requiring the cache manager to access the disk and retrieve that data again. However, disk access delays severely degrade the performance of the cache system and result in general degradation of the host performance.
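A small worked example of this drawback, using hypothetical numbers: if a 100-block cache segment is trimmed up front and the read-ahead is interrupted after 40 blocks, then 60 blocks of previously valid data have been discarded without being replaced.

```c
#include <stdio.h>

/* Illustration of the allocation-and-trimming drawback described above
   (hypothetical sizes, not taken from the patent). The whole cache segment
   is invalidated before the read starts; if the read-ahead is interrupted,
   the unfilled remainder holds nothing, even though it previously held
   valid, potentially reusable data. */
int main(void)
{
    int segment_blocks = 100;  /* size of the allocated (and trimmed) cache segment */
    int blocks_read    = 40;   /* read-ahead interrupted after 40 blocks            */

    int discarded_but_unused = segment_blocks - blocks_read;
    printf("blocks needlessly invalidated: %d\n", discarded_but_unused);
    /* Any later reference to those blocks is now a cache miss and forces
       another disk access. */
    return 0;
}
```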
There is, therefore, a need for a method of data transfer in a cache system which increases the cache hit ratio without degrading the cache performance due to disk access delays.
SUMMARY
The present invention satisfies these needs. In one embodiment, the present invention provides a method of data transfer in a cache system comprising a cache buffer including a plurality of data blocks for storing data, and a cache manager for retrieving data from a disk drive and storing the data into the cache buffer. The disk drive includes a data disk having a plurality of concentric data tracks thereon, a spindle motor for rotating the data disk, and a transducer supported by a carrier for positioning the transducer over individual data tracks to write data thereto or read data therefrom.
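A rough data-model sketch of this arrangement follows; the structures and field names are illustrative assumptions for the example, not the patent's own definitions.

```c
#include <stdint.h>
#include <stddef.h>

/* Cache buffer: a plurality of fixed-size data blocks. */
struct cache_buffer {
    uint8_t *blocks;            /* backing storage for the data blocks   */
    size_t   block_size;
    size_t   num_blocks;
};

/* Disk drive: data disk with concentric tracks, spindle motor, and a
   carrier-mounted transducer positioned over individual tracks. */
struct disk_drive {
    uint32_t num_tracks;        /* concentric data tracks on the disk    */
    uint32_t sectors_per_track;
    uint32_t rpm;               /* spindle motor speed                   */
    uint32_t transducer_track;  /* track the carrier has positioned the
                                   transducer over                       */
};

/* Cache manager: retrieves data from the drive into the cache buffer. */
struct cache_manager {
    struct cache_buffer *cache;
    struct disk_drive   *drive;
};
```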
In one embodiment, a method of data transfer in response to a request for retrieving a set of data blocks from the selected track comprises the steps of: (a) defining a data segment on the selected track, wherein the data segment comprises, in sequence, a pre-fetch data area, a fetch data area comprising said set of data blocks, and a post-fetch data area; (b) determining a landing position of the transducer over the selected track relative to the data segment; and (c) controlling transfer of data from the data segment to the cache buffer based on said landing position relative to the data segment. If said landing position is outside the data segment, data transfer includes delaying reading data from the data segment until the pre-fetch data area rotates under the transducer and thereafter commencing reading data from the pre-fetch area; otherwise, data transfer includes commencing reading data from said landing position in the data segment without delay.
If the landing position is within the post-fetch area, the data transfer further includes continuing reading data from the data segment until the end of the data segment, and thereafter, ceasing reading data from the data segment until the beginning of the data segment rotates under the transducer, then commencing reading data from the beginning of the data segment to at least the end of the fetch area. If the landing position in the data segment is at or before the beginning of the fetch area, data transfer further includes continuing reading data from said landing position to at least the end of the fetch data area. If the landing position is within the fetch data area, data transfer further includes continuing reading data from the data segment until the end of the data segment, and thereafter, ceasing reading data from the data segment until the beginning of the data segment rotates under the transducer, then commencing reading data from the beginning of the data segment to at least said landing position within the fetch data area.
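The landing-position cases of this embodiment can be summarized in the following C sketch; sector indices are assumed not to wrap at the track boundary, and wait_until_under_transducer and read_range_to_cache are hypothetical firmware helpers, so this is only an approximation of the claimed method, not its definitive implementation.

```c
#include <stdint.h>

/* Data segment of the first embodiment: in sequence, a pre-fetch area,
   a fetch area (the requested blocks), and a post-fetch area, all on the
   selected track. */
struct data_segment {
    uint32_t prefetch_start;  /* beginning of the data segment           */
    uint32_t fetch_start;     /* beginning of the fetch (requested) area */
    uint32_t fetch_end;       /* end of the fetch area (inclusive)       */
    uint32_t postfetch_end;   /* end of the data segment (inclusive)     */
};

/* hypothetical firmware helpers */
extern void wait_until_under_transducer(uint32_t sector); /* rotational wait */
extern void read_range_to_cache(uint32_t first, uint32_t last);

/* Control data transfer based on where the transducer lands relative to
   the data segment. */
void transfer_from_segment(const struct data_segment *seg, uint32_t landing)
{
    if (landing < seg->prefetch_start || landing > seg->postfetch_end) {
        /* Landed outside the segment: wait for the pre-fetch area to rotate
           under the transducer, then read from the start of the segment. */
        wait_until_under_transducer(seg->prefetch_start);
        read_range_to_cache(seg->prefetch_start, seg->postfetch_end);
    } else if (landing <= seg->fetch_start) {
        /* Landed at or before the fetch area: keep reading from the landing
           position; continuing to the segment end covers "at least the end
           of the fetch data area". */
        read_range_to_cache(landing, seg->postfetch_end);
    } else if (landing <= seg->fetch_end) {
        /* Landed inside the fetch area: read to the end of the segment,
           then wait one revolution and back-fill from the segment start
           up to at least the landing position. */
        read_range_to_cache(landing, seg->postfetch_end);
        wait_until_under_transducer(seg->prefetch_start);
        read_range_to_cache(seg->prefetch_start, landing);
    } else {
        /* Landed in the post-fetch area: read to the end of the segment,
           then wait and read from the segment start to at least the end
           of the fetch area. */
        read_range_to_cache(landing, seg->postfetch_end);
        wait_until_under_transducer(seg->prefetch_start);
        read_range_to_cache(seg->prefetch_start, seg->fetch_end);
    }
}
```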
The size of the pre-fetch data area is selected as a function of the size of the cache buffer to maximize a hit ratio of the data in the cache buffer. Similarly, the size of the post-fetch data area is selected as a function of the size of the cache buffer to maximize a hit ratio of the data in the cache buffer. Storing the retrieved data in the cache buffer according to the present invention includes: (a) allocating a cache segment in the cache buffer for storing data read from the data segment, (b) overwriting at least a portion of the cache segment with data read from the data segment, and (c) deallocating any remaining portion of the cache segment not overwritten with data from the data segment. The step of allocating the cache segment can comprise selecting a size for the cache segment at most equal to the size of the data segment.
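A minimal sketch of the allocate / overwrite / deallocate-remainder sequence (a)-(c) above, assuming a hypothetical block-level cache interface (cache_alloc_blocks, cache_free_blocks, read_block_from_segment); the interface names are not taken from the patent.

```c
#include <stddef.h>
#include <stdbool.h>

/* hypothetical block-level cache buffer interface */
extern size_t cache_alloc_blocks(size_t want);          /* returns first block index */
extern void   cache_free_blocks(size_t first, size_t count);
extern bool   read_block_from_segment(size_t disk_block, size_t cache_block);

/* Retrieve up to seg_blocks blocks of the data segment into a cache segment
   of at most the same size; any blocks not actually overwritten are given
   back instead of being left trimmed and empty. */
void fill_cache_segment(size_t seg_first_block, size_t seg_blocks)
{
    size_t cache_first = cache_alloc_blocks(seg_blocks); /* (a) allocate  */
    size_t done = 0;

    while (done < seg_blocks &&
           read_block_from_segment(seg_first_block + done,
                                   cache_first + done))  /* (b) overwrite */
        done++;   /* stops early if the read-ahead is interrupted */

    if (done < seg_blocks)                               /* (c) deallocate the
                                                            unwritten remainder */
        cache_free_blocks(cache_first + done, seg_blocks - done);
}
```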
In another embodiment, a method of data transfer in response to a request for retrieving a set of data blocks from the selected track, includes the steps of: (a) defining a data segment comprising, in sequence, a pre-fetch data area spanning a portion of a preceding track to the selected track and a portion of the selected track, a fetch data area comprising said set of data blocks on the selected track, and a post-fetch data area on the selected track, (b) determining a landing position of the transducer over the selected track relative to the data segment, and (c) controlling transfer of data from the data segment to the cache buffer based on said landing position relative to the data segment. If said landing position is inside the data segment, data transfer includes commencing reading data from said landing position in the data segment without delay. Otherwise, data transfer includes positioning the transducer over said preceding track, determining the position of the transducer over the preceding track relative to the data segment, determining if the transducer position is within the pre-fetch area,
Inventors: Bagachev Iouri, Bui Luong-Duc, Shats Serge, Kim Hong
Assignee: Quantum Corp.
Examiner: Yoo Do
Attorney: Zarrabian Michael