Method and system for staging data into cache
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Additional classes: C711S136000, C711S113000, C711S160000
Type: Reexamination Certificate (active)
Filed: 1998-08-19
Issued: 2002-04-30
Examiner: Kim, Matthew (Department: 2186)
Patent number: 06381677
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and system for staging data from a memory device to a cache and, in particular, to staging data in anticipation of data access requests.
2. Description of the Related Art
In current storage systems, a storage controller manages data transfer operations between host systems and their application programs and a direct access storage device (DASD), which may be a string of hard disk drives or other non-volatile storage devices. To execute a read operation presented from a host system, i.e., a data access request (DAR), the storage controller must physically access the data stored in tracks in the DASD. A DAR requests a set of contiguous data sets, such as tracks, records, fixed blocks, or any other grouping of data. Physically rotating the disk in the DASD to the requested track and then moving the reading unit to the disk section to read the data is often a time-consuming process. For this reason, current systems stage data into a cache memory of the storage controller in advance of host requests for such data.
A storage controller typically includes a large buffer managed as a cache to buffer data accessed from the attached DASD. In this way, data access requests (DARs) can be serviced at electronic speeds directly from the cache, thereby avoiding the electromechanical delays associated with reading the data from the DASD. Prior to receiving the actual DAR, the storage controller receives information identifying a sequence of tracks in the DASD involved in the upcoming read operation, or an indication that data is being accessed sequentially. The storage controller then stages the sequential tracks into the cache and processes DARs by accessing the staged data in the cache. In this way, the storage controller can return cached data in response to a read request at the data transfer speed of the storage controller channels, as opposed to non-cached data, which is transferred at the speed of the DASD device.
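The cache-first read path described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation; the `Controller` class, `stage()` helper, and dict-backed cache are assumptions made for the example.

```python
# Illustrative sketch of cache-first read servicing: a hit is served
# directly from cache; a miss falls back to staging from the DASD.
class Controller:
    def __init__(self, dasd):
        self.dasd = dasd      # track number -> track data (models the DASD)
        self.cache = {}       # tracks staged into the controller cache

    def stage(self, track):
        """Copy a track from the DASD into cache ahead of the host read."""
        self.cache[track] = self.dasd[track]

    def read(self, track):
        """Service a DAR from cache when possible (a cache hit)."""
        if track not in self.cache:   # cache miss: electromechanical path
            self.stage(track)
        return self.cache[track]

dasd = {0: "t0", 1: "t1", 2: "t2"}
ctl = Controller(dasd)
ctl.stage(1)          # prestage track 1 before the host asks for it
hit = ctl.read(1)     # served from cache
miss = ctl.read(2)    # staged on demand, then served
```

In a real controller the hit path runs at channel speed while the miss path waits on disk rotation and seek, which is the asymmetry the staging schemes below try to exploit.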
In prior art staging systems, a fixed number of tracks may be staged. In such fixed track staging systems, often one of the tracks being staged is already in cache. Other systems provide for staging data that will be subject to a sequential read operation, otherwise known as prestaging data. An example of a sequential prestaging routine to stage multiple tracks is described in U.S. Pat. No. 5,426,761, entitled “Cache and Sequential Staging and Method,” which patent is assigned to International Business Machines Corporation (“IBM”) and which is incorporated herein by reference in its entirety. U.S. Pat. No. 5,426,761 discloses an algorithm for sequential read operations that prestages multiple tracks into cache when a track is within the extent range of the sequential read operation, the track is not already in the cache, and a maximum number of tracks has not yet been prestaged into cache.
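The three prestage conditions described for that sequential prestaging routine can be expressed as a single predicate. The function name and parameters below are illustrative assumptions, not identifiers from the patent.

```python
# Sketch of the three prestage conditions: the track is within the
# extent range, not already cached, and the prestage limit is not hit.
def should_prestage(track, extent_range, cache, prestaged_count, max_prestage):
    in_extent = extent_range[0] <= track <= extent_range[1]
    not_cached = track not in cache
    under_limit = prestaged_count < max_prestage
    return in_extent and not_cached and under_limit

ok = should_prestage(5, (0, 9), {3}, 2, 4)        # all conditions hold
skip_cached = should_prestage(5, (0, 9), {5}, 2, 4)   # already in cache
skip_limit = should_prestage(5, (0, 9), {3}, 4, 4)    # prestage limit reached
```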
Tracks staged into cache may be demoted according to a Least Recently Used (LRU) algorithm to ensure that the staged data in cache does not exceed a predetermined threshold. Current IBM storage controllers that stage data into cache are described in the IBM publications “IBM 3990/9390 Storage Control Reference,” IBM publication no. GA32-0274-04 (IBM Copyright 1994, 1996), which publication is incorporated herein by reference, and “Storage Subsystem Library: IBM 3990 Storage Control Reference (Models 1, 2, and 3),” IBM document no. GA32-0099-06 (IBM Copyright 1988, 1994).
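LRU demotion of staged tracks can be sketched with an ordered mapping. The capacity value and class structure below are assumptions for illustration; the patent only specifies that demotion follows LRU order against a threshold.

```python
from collections import OrderedDict

# Sketch of LRU demotion keeping staged tracks under a threshold:
# the least recently used track is demoted first.
class LRUCache:
    def __init__(self, max_tracks):
        self.max_tracks = max_tracks
        self.tracks = OrderedDict()   # least recently used first

    def touch(self, track, data):
        """Stage or re-reference a track, demoting LRU tracks if over limit."""
        self.tracks[track] = data
        self.tracks.move_to_end(track)        # mark most recently used
        while len(self.tracks) > self.max_tracks:
            self.tracks.popitem(last=False)   # demote least recently used

cache = LRUCache(max_tracks=2)
cache.touch(1, "t1")
cache.touch(2, "t2")
cache.touch(1, "t1")    # re-reference track 1, so track 2 is now LRU
cache.touch(3, "t3")    # over capacity: track 2 is demoted
```

This is exactly the hazard noted below: a prestaged track that goes unreferenced long enough is demoted before its anticipated DAR arrives.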
One problem with current staging systems is the inability to adjust the rate of staging and/or the number of tracks staged to accommodate variances in DARs. If the number of tracks staged is not carefully controlled, then too many or too few tracks will be staged into cache. Staging too few tracks will delay responding to DARs because the response to the DAR will have to wait while the requested data is staged in from the DASD. On the other hand, staging too much data into cache in advance of when the data is needed will result in wasted cache space. Further, if staged data is not accessed for a considerable period of time, then the staged data may be demoted according to an LRU algorithm. In such case, the demoted staged data will not be available for the anticipated DAR.
SUMMARY OF THE PREFERRED EMBODIMENTS
To overcome the limitations in the prior art described above, preferred embodiments disclose a system for caching data. After determining a sequential access of a first memory area, a processing unit stages a group of data sets from the first memory area to a second memory area. The processing unit processes a data access request (DAR) for data sets in the first memory area that are included in the sequential access and reads the requested data sets from the second memory area. The processing unit determines a trigger data set from a plurality of trigger data sets based on a trigger data set criteria. The processing unit then stages a next group of data sets from the first memory area to the second memory area in response to reading the determined trigger data set.
In further embodiments, the processing unit receives a data access request (DAR), information indicating a range of data to be accessed, and a first data set number indicating a first data set of the data sets. The processing unit stages a group of data sets from the first memory area to a second memory area in response to processing the information. The processing unit then processes a DAR for data sets in the first memory area that are in the range. Upon reading a data set indicated as a trigger data set, the processing unit stages a next group of data sets from the first memory area to the second memory area.
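The trigger-based staging loop in these embodiments can be sketched as follows. The group size, trigger position, and helper names are illustrative assumptions; the patent leaves the trigger data set criteria to the claims.

```python
# Sketch of trigger-based prestaging: when the sequential read reaches the
# designated trigger data set, the next group is staged into cache.
def choose_trigger(group, offset=2):
    """Pick a trigger data set near the end of the staged group (assumed criterion)."""
    return group[max(0, len(group) - offset)]

def stage_group(dasd, cache, start, size):
    """Stage the next group of data sets from the DASD into cache."""
    group = list(range(start, min(start + size, len(dasd))))
    for t in group:
        cache[t] = dasd[t]
    return group

dasd = [f"track{i}" for i in range(12)]
cache = {}
group = stage_group(dasd, cache, 0, 4)   # stage tracks 0-3 up front
trigger = choose_trigger(group)          # track 2 fires the next stage
for t in range(8):                       # sequential DAR over tracks 0-7
    data = cache[t]                      # every read is a cache hit
    if t == trigger:                     # reading the trigger stages ahead
        group = stage_group(dasd, cache, group[-1] + 1, 4)
        trigger = choose_trigger(group)
```

Placing the trigger before the end of the group lets the next stage overlap with the remaining reads, so the host never catches up to unstaged data.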
In still further embodiments, the number of data sets in the group of data sets may be adjusted. The number of data sets in the group is decreased upon determining that the number of staged data sets demoted from the second memory area without having been subject to a DAR exceeds a first predetermined threshold. The number of data sets in the group is increased upon determining that the number of times requested data in a processed DAR was not staged to the second memory area exceeds a second predetermined threshold.
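The two feedback rules above can be sketched as a single adjustment function. The threshold values, step size, and bounds are assumptions for illustration; the patent specifies only the two predetermined thresholds.

```python
# Sketch of adaptive staging group size: shrink when staged data is
# demoted unread (staging too far ahead), grow when DARs miss the cache
# (staging lags behind the host).
def adjust_group_size(size, demoted_unread, cache_misses,
                      demote_threshold=5, miss_threshold=5,
                      step=1, lo=1, hi=16):
    if demoted_unread > demote_threshold:   # first threshold: wasted staging
        size = max(lo, size - step)
    if cache_misses > miss_threshold:       # second threshold: staging lags
        size = min(hi, size + step)
    return size

shrunk = adjust_group_size(8, demoted_unread=7, cache_misses=0)   # -> 7
grown = adjust_group_size(8, demoted_unread=0, cache_misses=9)    # -> 9
```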
Preferred embodiments seek to balance two competing goals. The first goal concerns providing sufficient tracks in cache to continuously make data available to DARs from host systems and avoid a cache miss. A cache miss occurs when the cache does not include the data requested by the host. The second goal concerns conserving available memory space in cache. To accomplish this second goal, the system must avoid staging too much data into cache far in advance of when the DAR will be received. Otherwise, the staged data remains unaccessed in cache and consumes cache space that could otherwise be made available for other operations.
REFERENCES:
patent: 4437155 (1984-03-01), Sawyer et al.
patent: 4458316 (1984-07-01), Fry et al.
patent: 4467421 (1984-08-01), White
patent: 4468730 (1984-08-01), Dodd et al.
patent: 4489378 (1984-12-01), Dixon et al.
patent: 4490782 (1984-12-01), Dixon et al.
patent: 4533995 (1985-08-01), Christian et al.
patent: 4571674 (1986-02-01), Hartung
patent: 4574346 (1986-03-01), Hartung
patent: 4583166 (1986-04-01), Hartung et al.
patent: 4603382 (1986-07-01), Cole et al.
patent: 4636946 (1987-01-01), Hartung et al.
patent: 4875155 (1989-10-01), Iskiyan et al.
patent: 4882642 (1989-11-01), Tayler et al.
patent: 4956803 (1990-09-01), Tayler et al.
patent: 4979108 (1990-12-01), Crabbe, Jr.
patent: 5134563 (1992-07-01), Tayler et al.
patent: 5263145 (1993-11-01), Brady et al.
patent: 5297265 (1994-03-01), Frank et al.
patent: 5426761 (1995-06-01), Cord et al.
patent: 5432919 (1995-07-01), Falcone et al.
patent: 5432932 (1995-07-01), Chen et al.
patent: 5434992 (1995-07-01), Mattson
patent: 5440686 (1995-08-01), Dahman et al.
patent: 5440727 (1995-08-01), Bhide et al.
patent: 5446871 (1995-08-01), Shomler et al.
patent: 5481691 (1996-01-01), Day, III et al.
patent: 5504861 (1996-0
Inventors: Beardsley, Brent Cameron; Benhase, Michael Thomas; Hyde, Joseph Smith; Jarvis, Thomas Charles; Martin, Douglas A.
Assignee: International Business Machines Corporation
Examiner: Kim, Matthew
Attorneys/Agents: Konrad Raynes Victor & Mann; Peugh, B. R.; Victor, David W.