Cache memory having a programmable cache replacement scheme

Patent Number: 06397298
Type: Reexamination Certificate
Filed: 1999-07-30
Issued: 2002-05-28
Status: active
Examiner: Elmore, Reba I. (Department: 2187)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
US Class: C711S128000
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to a data processing system in general, and in particular to a data processing system that utilizes a cache memory. Still more particularly, the present invention relates to a data processing system that utilizes a cache memory having a programmable cache replacement scheme.
2. Description of the Prior Art
Typically, a data processing system comprises a central processing unit (CPU), a main memory, and an input/output device. For some time, the speed at which the CPU can decode and execute instructions has far exceeded the speed at which instructions can be transferred from the main memory to the CPU. In an attempt to reduce this disparity, a cache memory is interposed between the CPU and the main memory in many data processing systems. A cache memory is a small, high-speed memory that is used to temporarily hold information, such as data and/or instructions, that is likely to be used in the near future by the CPU.
A cache memory contains many cache lines in which information is stored. Each cache line has an address tag that uniquely identifies which block of main memory it stores. Each time the CPU references memory, the cache memory compares the reference address with the address tags stored within to determine whether the cache memory holds a copy of the requested information. If the cache memory has a copy of the requested information, the cache memory supplies the requested information to the CPU; otherwise, the requested information is retrieved from the main memory. Because information located within a cache memory may be accessed in much less time than that located in a main memory, a CPU having a cache memory spends far less time waiting for information to be fetched and/or stored.
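For illustration only, the following C sketch shows the tag comparison just described, using a direct-mapped arrangement for simplicity; the line size, number of lines, field names, and address split are assumptions for the sketch, not details taken from the patent.

#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 32u    /* bytes held by one cache line (illustrative) */
#define NUM_LINES  256u   /* number of cache lines (illustrative)        */

typedef struct {
    bool     valid;              /* does this line hold a copy of memory?  */
    uint32_t tag;                /* identifies which block of main memory  */
    uint8_t  data[LINE_BYTES];   /* the cached copy itself                 */
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Return true on a hit: the line selected by the reference address already
 * holds the block whose stored address tag matches that address.          */
bool cache_lookup(uint32_t addr)
{
    uint32_t index = (addr / LINE_BYTES) % NUM_LINES;  /* which cache line   */
    uint32_t tag   = (addr / LINE_BYTES) / NUM_LINES;  /* which memory block */

    return cache[index].valid && cache[index].tag == tag;
}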
Earlier cache memory designs were typically fully-associative, meaning that all elements within the cache memory are searched associatively for each request from the CPU. However, large fully-associative cache memories are very expensive and relatively slow. Thus, in order to provide an access time acceptable for use with a CPU, the size of a fully-associative cache memory is necessarily limited, which yields a rather low hit ratio. More recently, cache memories have been organized into groups of smaller associative memories called sets, and those cache memories are known as set-associative cache memories. For a cache memory having L cache lines divided into s sets, there are L/s cache lines in each set. When an address in the main memory is mapped into the cache memory, the address can appear in any of the s sets. For a cache memory of a given size, searching each of the sets in parallel can improve access time by a factor of s. Nevertheless, the time to complete the required search is still undesirably lengthy.
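As a rough sketch of the set-associative organization described above, the code below maps a block address to one of the sets and searches only the L/s lines (ways) of that set; the set count, associativity, and names are illustrative assumptions, and a real design performs the per-way comparisons in parallel hardware rather than in a loop.

#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES 32u   /* bytes per cache line (illustrative)   */
#define NUM_SETS   64u   /* s sets                                 */
#define WAYS       4u    /* L/s cache lines (ways) per set         */

typedef struct {
    bool     valid;
    uint32_t tag;
} way_t;

static way_t sets[NUM_SETS][WAYS];

/* A memory block maps to exactly one set; only the WAYS lines of that
 * set need to be searched for a matching address tag.                */
bool set_assoc_lookup(uint32_t addr)
{
    uint32_t block = addr / LINE_BYTES;
    uint32_t set   = block % NUM_SETS;   /* set selected by the address    */
    uint32_t tag   = block / NUM_SETS;   /* identifies the block in a set  */

    for (uint32_t w = 0; w < WAYS; w++)
        if (sets[set][w].valid && sets[set][w].tag == tag)
            return true;                 /* cache hit                      */
    return false;                        /* cache miss: go to main memory  */
}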
The operation of cache memories to date has been based upon the assumption that, because a particular memory location has been referenced, those locations very close to it are very likely to be accessed in the near future. This is often referred to as the property of locality. The property of locality has two aspects, namely, temporal and spatial. Temporal locality (or property of locality by time) means that the information that will be in use in the near future is likely to be in use already. This type of behavior can be expected from certain program constructs, such as loops, in which both data and instructions are reused. Spatial locality (or property of locality by space) means that portions of the address space that are in use generally consist of a fairly small number of contiguous segments of that address space. In other words, the loci of reference of the program in the near future are likely to be near the current loci of reference. This type of behavior can be expected from common knowledge of program structure: related data items (variables, arrays) are usually stored together, and instructions are mostly executed sequentially. Because the cache memory retains segments of information that have been recently used, the property of locality implies that certain requested information is also likely to be found in the cache memory.
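A simple, hypothetical loop illustrates both kinds of locality at once:

/* Both kinds of locality in one loop:
 * - temporal: `sum`, `i`, and the loop's instructions are reused on every
 *   iteration, so they are likely to still be resident in the cache;
 * - spatial:  a[0], a[1], ... are stored contiguously, so each reference is
 *   close to the previous one and often falls in an already cached line.  */
long sum_array(const int *a, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}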
It is quite apparent that the larger the cache memory, the higher the probability of finding the requested information in the cache memory. Cache sizes cannot be expanded without limit, however, for reasons such as cost and access time. Thus, when a cache “miss” occurs, a decision must be made as to what information should be swapped out, via a process known as cast-out, to make room for the new information being retrieved from the main memory. Various cache replacement schemes can be utilized to decide what information should be cast out after a cache “miss.” Among those cache replacement schemes that are well-known in the art, the most commonly utilized is the Least-Recently Used (LRU) replacement scheme. According to the LRU replacement scheme, a cache memory maintains several status bits that track the access order of the cache lines. Each time a cache line is accessed, that cache line is marked as the most recently used, and the status bits of the other cache lines are adjusted accordingly. When a cache “miss” occurs, the information in the LRU cache line is cast out to make room for the requested information being retrieved from the main memory. Other cache replacement schemes that are also widely used are First-In-First-Out (FIFO) and random replacement, the nomenclature of each being self-explanatory.
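The sketch below shows one common way such LRU status bits can be maintained for a set (an age per way, 0 meaning most recently used); this encoding is an assumption made for illustration, not the specific status-bit scheme of the patent.

#include <stdint.h>

#define WAYS 4u

/* age[w] == 0 means most recently used; WAYS - 1 means least recently used. */
typedef struct {
    uint8_t age[WAYS];
} lru_state_t;

/* On every access, the touched way becomes the most recently used, and each
 * way that was more recent than it ages by one position.                    */
void lru_touch(lru_state_t *s, uint32_t way)
{
    uint8_t old = s->age[way];
    for (uint32_t w = 0; w < WAYS; w++)
        if (s->age[w] < old)
            s->age[w]++;            /* push more-recent ways back by one */
    s->age[way] = 0;                /* the touched way is now MRU        */
}

/* On a cache "miss", the least recently used way is chosen for cast-out. */
uint32_t lru_victim(const lru_state_t *s)
{
    uint32_t victim = 0;
    for (uint32_t w = 1; w < WAYS; w++)
        if (s->age[w] > s->age[victim])
            victim = w;
    return victim;
}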
Contrary to the above-stated assumption, however, not all computer data structures have the same kind of data locality. For some simple structures, such as data stacks or sequential data, the above-mentioned LRU replacement scheme is not optimal. Nonetheless, in accordance with the basic assumption that the data most likely to be referenced is that which was referenced most recently or is close to that data in physical address, prior art cache memory structures make no provision for deviation from the standard cache replacement schemes mentioned above. Consequently, it would be desirable to provide a cache memory having a more flexible cache replacement scheme.
SUMMARY OF THE INVENTION
In accordance with a preferred embodiment of the present invention, a linefill operation is first performed on a cache line after a cache “miss.” After the linefill operation, the cache line can be assigned any access status, but preferably not the most recently used status. The assignment of the access status is based on a programmable setting that defines the access status to be assigned after a linefill operation and after all subsequent accesses.
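The sketch below is only a rough illustration of this idea, under an assumed per-way age encoding and invented names (`fill_status`, `place_at`): a programmable setting decides which recency position a cache line receives after its linefill, rather than unconditionally marking it most recently used. It covers only the linefill case and is not the patented implementation.

#include <stdint.h>

#define WAYS 4u

/* age 0 = most recently used ... WAYS - 1 = least recently used. */
typedef struct {
    uint8_t age[WAYS];
    uint8_t fill_status;   /* programmable: recency position given after a linefill */
} pset_t;

/* Install the newly filled way at position `pos` in the recency order.
 * Assumes the filled way is the cast-out victim, i.e. it is currently the
 * least recently used way, so pos is at most its current age.             */
static void place_at(pset_t *s, uint32_t way, uint8_t pos)
{
    uint8_t old = s->age[way];
    for (uint32_t w = 0; w < WAYS; w++)
        if (s->age[w] >= pos && s->age[w] < old)
            s->age[w]++;                 /* make room at position pos */
    s->age[way] = pos;
}

/* After a cache "miss" and the subsequent linefill, the cache line is given
 * the programmed access status instead of being marked most recently used.
 * fill_status == 0 reproduces the usual fill-as-MRU behaviour; a larger
 * value leaves the freshly filled line closer to being cast out again.    */
void on_linefill(pset_t *s, uint32_t victim_way)
{
    place_at(s, victim_way, s->fill_status);
}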
All objects, features, and advantages of the present invention will become apparent in the following detailed written description.
REFERENCES:
patent: 4928239 (1990-05-01), Baum et al.
patent: 5253351 (1993-10-01), Yamamoto et al.
patent: 5752261 (1998-05-01), Cochcroft, Jr.
patent: 5765191 (1998-06-01), Loper et al.
patent: 5787478 (1998-07-01), Hicks et al.
patent: 6240489 (2001-05-01), Durham et al.
Inventors: Arimilli, Ravi Kumar; Dodson, John Steven; Guthrie, Guy Lynn
Assignee: International Business Machines Corporation
Agent: Bracewell & Patterson, L.L.P.; Salys, Casimer K.
Examiner: Elmore, Reba I.