Method and system for managing data in cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

U.S. Classification: C711S133000, C711S113000

Type: Reexamination Certificate

Status: active

Patent number: 06327644

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method and system for caching data and, in particular, for caching data associated with different access operations, such as sequential and non-sequential data access operations.
2. Description of the Related Art
Data processing systems use a high-speed managed buffer memory, otherwise known as cache, to store frequently used data that is regularly maintained in a relatively slower memory device. For instance, a cache can be a RAM that buffers frequently used data regularly stored in a hard disk drive or a direct access storage device (DASD). After a track is read from the DASD, the track is cached in RAM and available for subsequent data access requests (DARs). In this way, a storage controller processing read requests can avoid the mechanical delays of having to physically access and read data from the DASD. Cache can also be high-speed memory attached to a microprocessor, storing data and instructions used by the microprocessor that are regularly maintained in RAM. Such processor cache buffers data from a volatile memory device, such as DRAM.
Often data in cache is managed according to a least recently used (LRU) replacement algorithm in which the least recently used data is demoted from the cache to make room for new data. A first-in-first-out (FIFO) algorithm may also be used. The LRU replacement algorithm works by organizing the data in the cache in a linked list of data entries which is sorted according to the length of time since the most recent reference to each data entry. The most recently used (MRU) data is at one end of the linked list, while the least recently used (LRU) data is at the other. Data that is accessed from the linked list or added for the first time is placed at the MRU end. When data is demoted to accommodate the addition of new data, the demoted data is removed from the LRU end.
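As a rough illustration (the patent itself publishes no source code), the following Python sketch models such an LRU list with collections.OrderedDict; the class name and capacity parameter are illustrative, not part of the patent:

    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU list: entries ordered from the LRU end to the MRU end."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()  # front = LRU end, back = MRU end

        def access(self, key):
            """Return a cached entry, promoting it to the MRU end."""
            if key not in self.entries:
                return None                  # cache miss
            self.entries.move_to_end(key)    # re-referenced data moves to MRU end
            return self.entries[key]

        def add(self, key, value):
            """Insert at the MRU end, demoting from the LRU end when full."""
            self.entries[key] = value
            self.entries.move_to_end(key)    # new data is placed at the MRU end
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # demote from the LRU end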
Data can be accessed sequentially or non-sequentially. In the non-sequential access mode, data records are randomly requested. Such non-sequential accesses often occur when an application needs a particular record or data set. Sequential data access occurs when numerous adjacent tracks are accessed, such as for a data backup operation or to generate a large report. For instance, a disk backup usually creates one long sequential reference to the entire disk, thus flooding the cache with data. One problem with LRU schemes is that if a sequential data access floods the cache when placed at the MRU end, then non-sequential records are demoted and removed from cache to accommodate the large sequential data access. Once the non-sequential data is demoted from cache, a data access request (DAR) for the demoted data must be handled by physically accessing the data from the slower memory device.
One goal of cache management algorithms is to maintain reasonable “hit ratios” for a given cache size. A “hit” is a DAR that was returned from cache, whereas a “miss” occurs when the requested data is not in cache and must be retrieved from DASD. A “hit ratio” is empirically determined from the number of hits divided by the total number of DARs, both hits and misses. System performance is often determined by the hit ratio. A system with a low hit ratio may cause delays to application program processing while requested data is retrieved from DASD.
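As a worked example (a minimal sketch; the counter names are illustrative), the ratio can be computed directly from running hit and miss counts:

    def hit_ratio(hits, misses):
        """Hit ratio = hits / total DARs, where total DARs = hits + misses."""
        total = hits + misses
        return hits / total if total else 0.0

    # 900 hits out of 1,000 total DARs yields a 0.9 hit ratio
    assert hit_ratio(900, 100) == 0.9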
A low hit ratio indicates that the requested data often was not in cache and had to be retrieved from DASD. Low hit ratios may occur if non-sequentially accessed data is “pushed” out of the cache to make room for a long series of sequentially accessed data. This exchange is particularly costly: the non-sequentially accessed data that is pushed out has a greater likelihood of being requested again, while the sequentially accessed data that displaces it has a lower likelihood of being accessed, so subsequent DARs miss the cache and the hit ratio falls further.
In certain systems, sequential data is placed at the LRU end and non-sequential data at the MRU end. Such methodologies often yield an unreasonably low hit ratio for sequentially accessed data, which still has some probability of being re-accessed (although usually less than non-sequentially accessed data). Because placing sequentially accessed data at the LRU end causes it to be demoted very quickly, these algorithms provide a relatively low hit ratio for that data.
SUMMARY OF THE PREFERRED EMBODIMENTS
To overcome the limitations in the prior art described above, preferred embodiments disclose a method for caching data. A list of data entries in a first memory area has a first end and a second end. A first pointer addresses a data entry in the list and a second pointer addresses another data entry in the list that is not at the first and second ends. Data from a second memory area is provided to add to the list. A determination is made as to whether the provided data to add to the list is one of a first type and second type of data. The provided data is stored in the first memory area as a new data entry in the list. The first pointer is modified to address the new data entry after determining that the provided data is of the first type. After determining that the provided data is of the second type, the second pointer is processed to determine where to add the new data entry to the list between the first and second ends.
In further embodiments, the data entries in the list are ordered hierarchically from the first end to the second end. The first pointer addresses a data entry at the first end and the second pointer addresses a data entry in substantially a middle of the list.
In yet further embodiments, the first type of data comprises data associated with a non-sequential data request and the second type of data comprises data associated with a series of sequential data requests.
Preferred embodiments provide a method and system for adding data of one type to one end of an LRU list and data of another type to another location in the list, such as the middle. This caching scheme prevents data that is accessed in large continuous segments, such as sequentially accessed data, from overwhelming cached data of a second type, such as non-sequential data. By caching different data types at different locations in the list, data that tends to dominate cache, such as sequential data, will not push out less frequently accessed data, such as non-sequential data. Preferred embodiments thus provide an algorithm to control the placement of data in a cache according to the data access needs of the system.
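To make the placement policy concrete, the following hypothetical Python sketch (not taken from the patent; the class and method names are illustrative) keeps the MRU end at index 0 and inserts sequential entries at a maintained mid-list position, so a sequential flood displaces entries only from the lower half of the list:

    class SplitInsertLRU:
        """Illustrative two-pointer LRU list: non-sequential data enters at
        the MRU end; sequential data enters near the middle, so it cannot
        push recently used non-sequential data out of cache."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = []  # index 0 = MRU end, index -1 = LRU end

        def add(self, entry, sequential):
            if sequential:
                # second pointer: roughly the middle of the list
                self.entries.insert(len(self.entries) // 2, entry)
            else:
                # first pointer: the MRU end
                self.entries.insert(0, entry)
            if len(self.entries) > self.capacity:
                self.entries.pop()  # demote from the LRU end

For example, with a capacity of four, a long run of sequential additions demotes earlier sequential entries from the LRU end while a recently used non-sequential entry remains near the MRU end, matching the behavior the summary describes.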


