Maximizing sequential read streams while minimizing the...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


C711S118000, C711S154000, C711S207000


active

06633957

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to computing systems and to the handling of sequential read streams in such systems. More particularly, the invention relates to optimizing system performance during disk sequential read streams while minimizing the impact of such optimization on cache operations and other non-sequential read applications of the computing system.
2. Description of the Related Art
In a computing system having cache memory and large volume storage devices, such as disk drives and tape drives, it is desirable to transfer information from a large volume storage device to cache memory. Relative to the speed of the computer processor, the time to access a record on a large volume storage device is very slow, while the time to access a record in cache memory is quite fast. Where the application program being run by the computing system uses sequential records, system performance is enhanced by prefetching records from a large volume storage drive, such as a disk drive, and loading them into cache memory just before the processor requests them. When the read request is then received from the processor, the record is rapidly read from cache.
The prefetching of records from a large volume storage device is known to present three problems. The first problem is determining under what conditions the system should perform a prefetch. Since prefetching is most effective when reading sequential records, the first problem is really how to determine that the system is reading sequential records. The second problem is determining the size of the data block to be prefetched. Prefetching data from the disk drive loads down the drive relative to access by other applications, so the time spent prefetching should be as small as possible; in other words, how small can the number of prefetched data blocks be while still accomplishing the prefetch goals? The third problem is determining how long prefetched data should remain in cache. If the cache is loaded with large volumes of prefetched sequential records, random-access records for other applications are squeezed out of cache memory.
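The first problem above, detecting that the system is reading sequential records, can be sketched as follows. The patent text does not specify the detection criterion; this illustration assumes a stream is declared once a chosen number of consecutive read requests target contiguous block ranges, and the class name, method names, and threshold are all hypothetical.

```python
# Illustrative sketch only: the threshold and the contiguity criterion
# are assumptions, not taken from the patent text.
SEQ_THRESHOLD = 3  # consecutive contiguous requests needed to declare a stream


class StreamDetector:
    def __init__(self):
        self.next_expected = None  # block address the next request would start at
        self.run_length = 0        # length of the current contiguous run

    def observe(self, start_block, num_blocks):
        """Record one read request; return True once a stream is detected."""
        if start_block == self.next_expected:
            self.run_length += 1   # request continues the contiguous run
        else:
            self.run_length = 1    # run broken; start counting again
        self.next_expected = start_block + num_blocks
        return self.run_length >= SEQ_THRESHOLD
```

Under this sketch, three back-to-back contiguous requests trigger prefetching, while a random-access request resets the count so that non-sequential workloads never pay the prefetch cost.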
SUMMARY OF THE INVENTION
In accordance with this invention, the above problems are solved by initiating a prefetch of a number of records from the storage devices for transfer to the cache, so that requested records can be returned to the host computer in response to its read requests. If a previous prefetch is not complete when a read request is received, the number of records in the next prefetch is increased by a preset amount; if the previous prefetch is complete, the next prefetch is initiated with the same number of records as the previous prefetch. The initiation of prefetch operations is triggered by detection of a sequential read stream in a plurality of read requests from the host computer. When the prefetch size is increased, the preset amount of the increase is the number of records in the host computer's read request. After requested records are returned from the cache to the host computer in response to the read request, the cache storage space holding those records is released.
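The size-adaptation rule described above can be sketched in a few lines: grow the next prefetch by the size of the host's read request whenever the previous prefetch had not completed in time, and otherwise keep the size unchanged. The class name, method name, and initial size below are illustrative assumptions, not details from the patent.

```python
# Hedged sketch of the summary's size-adaptation rule; names and the
# initial prefetch size are illustrative assumptions.
class PrefetchController:
    def __init__(self, initial_size):
        self.prefetch_size = initial_size  # records per prefetch

    def next_prefetch_size(self, prev_prefetch_complete, request_size):
        """Return the number of records for the next prefetch."""
        if not prev_prefetch_complete:
            # The prefetch lagged behind the host: increase the next
            # prefetch by the number of records in the host's read request.
            self.prefetch_size += request_size
        # If the previous prefetch completed in time, keep the same size.
        return self.prefetch_size
```

Because the increment equals the host's own request size, the prefetch size tracks demand rather than growing without bound: it stops growing as soon as prefetches begin completing before the next request arrives.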
One great advantage and utility of the present invention is that prefetch operations are performed only when they are most effective, i.e., for sequential read streams. Also, the prefetch size is controlled so that it tracks the need to respond to the host and does not grow without bound. Further, cache memory space is preserved by releasing cache space after prefetched data for a sequential read has been passed from the cache to the host computer.
The foregoing and other features, utilities and advantages of the invention will be apparent from the following more particular description of a preferred embodiment of the invention as illustrated in the accompanying drawings.




Profile ID: LFUS-PAI-O-3126551
