Method of cache management to dynamically update...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

711/129, 711/133, 711/145

Reexamination Certificate

active

06434669

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to data processing and, in particular, to cache memory in a data processing system. Still more particularly, the present invention relates to a set associative cache in a data processing system that stores information in particular sets according to information type.
2. Description of the Related Art
A cache is a small amount of expensive high-speed memory, which is commonly utilized within a data processing system to improve a processor's access time to data stored within an associated memory, thereby decreasing access latency. A cache typically comprises a number of cache lines, which each include several bytes of data. Data stored within memory is mapped into a cache utilizing an index portion of the memory addresses associated with the data, such that multiple memory addresses having the same index portion map to the same cache line. Cached data associated with a particular memory address are distinguished from data associated with other addresses having the same index portion by an address tag, typically the high order address bits, which is stored in association with the cached data. In order to minimize the conflict between data associated with addresses having identical index portions, many data processing system caches are implemented as set associative caches, which include a number of congruence classes that each contain multiple sets (storage locations) for storing cache lines.
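To make the address-to-set mapping described above concrete, the following C sketch models a conventional set associative lookup. The geometry (64-byte lines, 128 congruence classes, 8 sets per class) and every identifier are illustrative assumptions, not parameters taken from this patent.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical geometry: 64-byte lines, 128 congruence classes, 8 ways (sets). */
#define LINE_BYTES   64
#define NUM_CLASSES  128
#define NUM_WAYS     8

typedef struct {
    bool     valid;
    uint64_t tag;                 /* high-order address bits        */
    uint8_t  data[LINE_BYTES];    /* cached copy of the memory line */
} cache_line_t;

typedef struct {
    cache_line_t way[NUM_CLASSES][NUM_WAYS];
} cache_t;

/* Split an address into its index (congruence class) and tag portions. */
static inline uint64_t addr_index(uint64_t addr) { return (addr / LINE_BYTES) % NUM_CLASSES; }
static inline uint64_t addr_tag(uint64_t addr)   { return (addr / LINE_BYTES) / NUM_CLASSES; }

/* A hit occurs when any valid way in the congruence class holds the tag. */
static int cache_lookup(const cache_t *c, uint64_t addr)
{
    uint64_t idx = addr_index(addr), tag = addr_tag(addr);
    for (int w = 0; w < NUM_WAYS; w++)
        if (c->way[idx][w].valid && c->way[idx][w].tag == tag)
            return w;             /* hit: return the matching way */
    return -1;                    /* miss */
}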
When data requested by the processor does not reside within a set associative cache, a cache miss occurs, and the requested data are fetched from a lower level cache or memory. In order to accommodate the requested data within the cache, data resident within one of the sets of the congruence class to which the requested data maps often must be replaced or “cast out.” The replaced set is typically selected utilizing a single predetermined victim selection algorithm, such as a least recently used (LRU) or most recently used (MRU) algorithm, that is believed, on average, to retain in the cache data having the highest probability of being requested by the processor.
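The victim selection step can be pictured with the following sketch, which continues the hypothetical geometry above and implements a simple counter-based LRU algorithm; real designs often use pseudo-LRU bits instead, and nothing here is drawn from the patent itself.

/* Per-class LRU state: age 0 is most recently used; larger is older. */
typedef struct {
    uint8_t age[NUM_WAYS];
} lru_state_t;

/* Pick the oldest (least recently used) way in the class as the victim. */
static int lru_select_victim(const lru_state_t *s)
{
    int victim = 0;
    for (int w = 1; w < NUM_WAYS; w++)
        if (s->age[w] > s->age[victim])
            victim = w;
    return victim;
}

/* On each access the touched way becomes age 0, and every way that was
   more recently used than it ages by one.                              */
static void lru_touch(lru_state_t *s, int w)
{
    uint8_t old = s->age[w];
    for (int i = 0; i < NUM_WAYS; i++)
        if (s->age[i] < old)
            s->age[i]++;
    s->age[w] = 0;
}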
The present invention recognizes that a significant drawback of conventional cache architectures is that they apply uniform allocation and victim selection policies (and other cache policies) to all types of data regardless of the persistence (or other characteristics) of the data. For example, while an LRU victim selection algorithm may be optimal for application data, other types of data stored within the same congruence class, for example, program instructions or address translation table entries, may have differing persistence and may therefore be more efficiently managed utilizing a different victim selection policy.
SUMMARY OF THE INVENTION
The present invention addresses the above-noted shortcomings of prior art cache architectures by introducing a set associative cache that implements information type-dependent policies, and in particular, information type-dependent allocation and victim selection policies.
A set associative cache in accordance with the present invention includes a cache controller, a directory, and an array including at least one congruence class containing a plurality of sets. The plurality of sets are partitioned into multiple groups according to which of a plurality of information types each set can store. The sets are partitioned so that at least two of the groups include the same set and at least one of the sets can store fewer than all of the information types. To optimize cache operation, the cache controller dynamically modifies a cache policy of a first group while retaining a cache policy of a second group, thus permitting the operation of the cache to be individually optimized for different information types. The dynamic modification of cache policy can be performed in response to either a hardware-generated or software-generated input.
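One way to picture the partitioning and per-group policy control described in this summary is the following C sketch, again continuing the geometry assumed above. The information types, the particular way masks, and the two policies shown are purely illustrative assumptions and are not taken from the patent claims.

#include <stdint.h>
#include <stdbool.h>

/* Illustrative information types; the names are assumptions. */
typedef enum { INFO_DATA, INFO_INSTRUCTION, INFO_TRANSLATION, INFO_TYPES } info_type_t;

typedef enum { POLICY_LRU, POLICY_MRU } policy_t;

/* One possible partition: each information type may be allocated into a
   (possibly overlapping) subset of the ways, and each group carries its
   own, independently modifiable victim-selection policy.                */
typedef struct {
    uint8_t  way_mask[INFO_TYPES];   /* bit w set => way w may hold this type */
    policy_t policy[INFO_TYPES];     /* current policy for each group         */
} group_config_t;

static group_config_t config = {
    /* ways 0-5 for data, ways 4-7 for instructions (groups overlap on
       ways 4-5), ways 6-7 for address translation entries              */
    .way_mask = { [INFO_DATA] = 0x3F, [INFO_INSTRUCTION] = 0xF0,
                  [INFO_TRANSLATION] = 0xC0 },
    .policy   = { [INFO_DATA] = POLICY_LRU, [INFO_INSTRUCTION] = POLICY_LRU,
                  [INFO_TRANSLATION] = POLICY_MRU },
};

/* Allocation honors the group mask: a line of a given type may only be
   placed in a way whose mask bit is set for that type.                  */
static bool way_allowed(const group_config_t *cfg, info_type_t t, int w)
{
    return (cfg->way_mask[t] >> w) & 1u;
}

/* A hardware- or software-generated input can retune one group's policy
   while the policies of the remaining groups are retained unchanged.    */
static void set_group_policy(group_config_t *cfg, info_type_t t, policy_t p)
{
    cfg->policy[t] = p;
}

For instance, a miss-rate counter crossing a threshold (a hardware-generated input) or a mode-register write (a software-generated input) might invoke set_group_policy(&config, INFO_TRANSLATION, POLICY_LRU) for that one group only, leaving the data and instruction groups untouched.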
All objects, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 5210843 (1993-05-01), Ayers
patent: 5651135 (1997-07-01), Hatakeyama
patent: 5717893 (1998-02-01), Mattson
patent: 5751990 (1998-05-01), Krolak et al.
patent: 6014728 (2000-01-01), Baror
patent: 6032227 (2000-02-01), Shaheen et al.
patent: 6044478 (2000-03-01), Green
patent: 6047358 (2000-04-01), Jacobs
patent: 6058456 (2000-05-01), Arimilli et al.
patent: 6148368 (2000-11-01), DeKoning
patent: 6260114 (2001-07-01), Schug
patent: 6272598 (2001-08-01), Arlitt et al.
