Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Filed: 1999-09-07
Issued: 2002-07-23
Examiner: Kim, Matthew (Department: 2186)
Other classes: C711S128000, C711S135000
Status: active
Patent number: 06425058
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to data processing and, in particular, to cache memory in a data processing system. Still more particularly, the present invention relates to a set associative cache in a data processing system that stores information in particular sets according to information type.
2. Description of the Related Art
A cache is a small amount of expensive high-speed memory, which is commonly utilized within a data processing system to improve a processor's access time to data stored within an associated memory, thereby decreasing access latency. A cache typically comprises a number of cache lines, which each include several bytes of data. Data stored within memory is mapped into a cache utilizing an index portion of the memory addresses associated with the data, such that multiple memory addresses having the same index portion map to the same cache line. Cached data associated with a particular memory address are distinguished from data associated with other addresses having the same index portion by an address tag, typically the high order address bits, which is stored in association with the cached data. In order to minimize the conflict between data associated with addresses having identical index portions, many data processing system caches are implemented as set associative caches, which include a number of congruence classes that each contain multiple sets (storage locations) for storing cache lines.
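The index/tag decomposition described above can be sketched as follows. The parameters (a 64-byte line, 128 congruence classes) are illustrative assumptions, not values from the patent:

```python
# Illustrative address decomposition for a set associative cache.
# LINE_SIZE and NUM_CLASSES are assumed values for this sketch.
LINE_SIZE = 64          # bytes per cache line
NUM_CLASSES = 128       # number of congruence classes
OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 6 bits of byte offset
INDEX_BITS = NUM_CLASSES.bit_length() - 1  # 7 bits of index

def decompose(address: int) -> tuple[int, int]:
    """Split an address into (index, tag); the byte offset is discarded.

    The index selects the congruence class; the remaining high-order
    bits form the address tag stored alongside the cached data.
    """
    index = (address >> OFFSET_BITS) & (NUM_CLASSES - 1)
    tag = address >> (OFFSET_BITS + INDEX_BITS)
    return index, tag
```

Two addresses whose index portions match (e.g. 0x12345 and 0x52345 under these parameters) map to the same congruence class even though their tags differ, which is exactly the conflict the tag comparison resolves.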
When data requested by the processor does not reside within a set associative cache, a cache miss occurs, and the requested data are fetched from a lower level cache or memory. In order to accommodate the requested data within the cache, data resident within one of the sets of the congruence class to which the requested data maps often must be replaced or “cast out.” The replaced set is typically selected utilizing a single predetermined victim selection algorithm, such as a least recently used (LRU) or most recently used (MRU) algorithm, that is believed, on average, to retain in the cache data having the highest probability of being requested by the processor.
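A minimal sketch of LRU victim selection within one congruence class might look like the following (the class name and structure are illustrative; the patent itself notes that LRU is only one possible policy):

```python
from collections import OrderedDict

class LRUSet:
    """One congruence class whose victim is chosen by an LRU policy.

    An illustrative sketch: the OrderedDict keeps tags in recency
    order, oldest first, so the LRU victim is always at the front.
    """
    def __init__(self, num_ways: int):
        self.num_ways = num_ways
        self.lines = OrderedDict()   # tag -> cached data

    def access(self, tag, data=None):
        if tag in self.lines:                  # hit: refresh recency
            self.lines.move_to_end(tag)
            return self.lines[tag]
        if len(self.lines) >= self.num_ways:   # miss with full class:
            self.lines.popitem(last=False)     # cast out the LRU victim
        self.lines[tag] = data                 # fill with fetched data
        return data
```

For example, in a 2-way class, accessing tags 1, 2, then 3 casts out tag 1; but accessing 1, 2, 1, then 3 casts out tag 2, because the hit on tag 1 refreshed its recency.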
The present invention recognizes that a significant drawback of conventional cache architectures is that they apply uniform allocation and victim selection policies (and other cache policies) to all types of data regardless of the persistence (or other characteristics) of the data. For example, while an LRU victim selection algorithm may be optimal for application data, other types of data stored within the same congruence class, for example, program instructions or address translation table entries, may have differing persistence and may therefore be more efficiently managed utilizing a different victim selection policy.
SUMMARY OF THE INVENTION
The present invention addresses the above-noted shortcomings of prior art cache architectures by introducing a set associative cache that implements data type-dependent policies, and in particular, data type-dependent allocation and victim selection policies.
A set associative cache in accordance with the present invention includes a cache controller, a directory, and an array including at least one congruence class containing a plurality of sets. The plurality of sets are partitioned into multiple groups according to which of a plurality of information types each set can store. The sets are partitioned so that at least two of the groups include the same set and at least one of the sets can store fewer than all of the information types. The cache controller then implements different cache policies for at least two of the plurality of groups, thus permitting the operation of the cache to be individually optimized for different information types.
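The partitioning described above can be sketched in a small table of groups. The group names, way assignments, and policies below are assumptions chosen only to exhibit the claimed structure: groups may overlap (ways 4 and 5 belong to both the data and instruction groups), at least one way stores fewer than all information types, and different groups carry different victim selection policies:

```python
# Illustrative partition of an 8-way congruence class by information
# type. All memberships and policies here are assumed for the sketch.
GROUPS = {
    "data":        {"ways": {0, 1, 2, 3, 4, 5}, "policy": "lru"},
    "instruction": {"ways": {4, 5, 6},          "policy": "mru"},
    "translation": {"ways": {7},                "policy": "lru"},
}

def candidate_ways(info_type: str) -> set[int]:
    """A new line of a given type may only be allocated into the ways
    of its group, so allocation is data type-dependent."""
    return GROUPS[info_type]["ways"]

def victim_policy(info_type: str) -> str:
    """Victim selection is likewise chosen per group, so, e.g.,
    instructions can be managed by MRU while data uses LRU."""
    return GROUPS[info_type]["policy"]
```

On a miss, the controller would restrict both fill and cast-out to `candidate_ways(info_type)` and apply `victim_policy(info_type)` within that subset, which is how per-type optimization falls out of the partitioning.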
All objects, features, and advantages of the present invention will become apparent in the following detailed written description.
REFERENCES:
patent: 5210843 (1993-05-01), Ayers
patent: 5651135 (1997-07-01), Hatakeyama
patent: 5717893 (1998-02-01), Mattson
patent: 5751990 (1998-05-01), Krolak et al.
patent: 5915262 (1999-06-01), Bridgers et al.
patent: 6014728 (2000-01-01), Baror
patent: 6032227 (2000-02-01), Shaheen et al.
patent: 6044478 (2000-03-01), Green
patent: 6047358 (2000-04-01), Jacobs
patent: 6058456 (2000-05-01), Arimilli et al.
patent: 6148368 (2000-11-01), DeKoning
patent: 6260114 (2001-07-01), Schug
patent: 6272598 (2001-08-01), Arlitt et al.
Arimilli, Lakshminarayana Baba
Arimilli, Ravi Kumar
Fields, James Stephen, Jr.
Bataille, Pierre-Michel
Bracewell & Patterson L.L.P.
International Business Machines Corporation
Kim, Matthew
Salys, Casimer K.