Software-controlled cache memory compartmentalization

Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate (active)
Filed: 1997-09-30
Issued: 2002-08-13
Examiner: Kim, Hong (Department: 2187)
U.S. Classes: 711/128; 711/133; 711/136; 711/3
Patent Number: 6434671
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates to the field of data storage, and more particularly to a method and apparatus for storing data in a cache memory.
BACKGROUND OF THE INVENTION
Cache memories are relatively small, high-speed memories used to reduce memory access time. Cache memories exploit two characteristics of memory access to reduce access time: temporal locality, the tendency of computer programs to repeatedly access the same memory locations; and spatial locality, the tendency of computer programs to access memory locations that are close to one another.
In order to exploit temporal and spatial locality, data from frequently accessed regions of system memory are stored in cache memory. That way, subsequent accesses to the cached memory regions will not incur the full system memory access time, but the shorter cache access time instead. A memory transaction that accesses cache memory instead of main memory is called a cache hit, and the cache “hit-rate” is a fundamental metric of cache operation.
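The relationship between hit-rate and access time described above can be made concrete with a short sketch. The latencies below are illustrative assumptions, not figures from the patent:

```python
# Hedged sketch: effective (average) memory access time as a function of the
# cache hit-rate. The 2 ns cache and 60 ns main-memory latencies are
# illustrative assumptions.
def effective_access_time(hit_rate, cache_ns=2.0, memory_ns=60.0):
    """Hits pay the cache latency; misses pay the main-memory latency."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * memory_ns

# A high hit-rate pulls the average close to the cache latency:
# 0.95 * 2.0 + 0.05 * 60.0 = 4.9 ns
print(effective_access_time(0.95))  # -> 4.9
```

Even a modest drop in hit-rate moves the average sharply toward the main-memory latency, which is why the thrashing effects discussed below matter.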
Several techniques have been employed to increase cache hit-rates. For example, to further exploit spatial locality, caches have been designed with increasingly larger row sizes. The size of a cache row (also called a cache line) defines the quantum of data stored in a cache memory after a cache miss. As the row size increases, it becomes more likely that subsequent memory accesses will address data in the row, thus improving the cache hit-rate.
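Because a whole row is filled on a miss, nearby addresses fall in the same row and hit afterward. A minimal sketch of how an address decomposes into tag, index, and row offset (the 32-byte line size and 128-set geometry are assumptions for illustration):

```python
# Assumed geometry: 32-byte lines (5 offset bits), 128 sets (7 index bits).
LINE_SIZE = 32
NUM_SETS = 128

OFFSET_BITS = LINE_SIZE.bit_length() - 1   # 5
INDEX_BITS = NUM_SETS.bit_length() - 1     # 7

def split_address(addr):
    """Decompose a byte address into (tag, cache index, offset in line)."""
    offset = addr & (LINE_SIZE - 1)
    index = (addr >> OFFSET_BITS) & (NUM_SETS - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

print(split_address(0x1234))  # -> (1, 17, 20)
```

Addresses that differ only in the offset bits land in the same line, so once the line is filled by one miss, the neighbors hit: this is how a larger row size converts spatial locality into hit-rate.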
Temporal locality is exploited to improve cache hit-rate by providing multiple storage elements that are addressed by the same cache index. The storage elements are commonly referred to as “ways” and a cache memory that has multiple ways is called a “multiple-way, set-associative cache”. The idea behind multiple-way cache memories is to allow more than one system memory address to correspond to each cache index. Because the cache index is a sub-field of the overall system memory address, multiple-way design avoids repeated cache misses that occur in single-way designs when different addresses having the same cache index are accessed in succession. In single-way or “direct-mapped” cache designs, successive accesses at memory locations having the same cache index result in a sequence of cache miss/cache update operations. This phenomenon is referred to as “thrashing” because data is rapidly swapped into and out of the cache, and much of the benefit of the cache memory is lost.
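The thrashing behavior described above is easy to demonstrate. The sketch below simulates a direct-mapped cache and a 2-way set-associative cache on an alternating access pattern whose two addresses share a cache index; the 8-set geometry and LRU replacement are assumptions for illustration:

```python
from collections import OrderedDict

def simulate(accesses, num_sets, num_ways):
    """Count hits for a set-associative cache with LRU replacement."""
    sets = [OrderedDict() for _ in range(num_sets)]  # per-set tags, LRU order
    hits = 0
    for addr in accesses:
        index, tag = addr % num_sets, addr // num_sets
        ways = sets[index]
        if tag in ways:
            hits += 1
            ways.move_to_end(tag)        # refresh LRU position
        else:
            if len(ways) >= num_ways:
                ways.popitem(last=False)  # evict least recently used
            ways[tag] = None
    return hits

# Addresses 0 and 8 map to the same set in an 8-set cache.
pattern = [0, 8] * 8
print(simulate(pattern, num_sets=8, num_ways=1))  # -> 0  (pure thrashing)
print(simulate(pattern, num_sets=8, num_ways=2))  # -> 14 (hits after 2 fills)
```

In the direct-mapped case every access evicts the line the next access needs, so the hit-rate is zero; with two ways, both tags coexist in the set and all accesses after the first two hit.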
Despite the advantages of multiple-way, set-associative cache memories, a significant amount of thrashing still occurs when the processor switches between tasks or functions that have dislocated code and data spaces. For example, if, while executing a first task having program code located within a given region of system memory, the processor switches to a second task having program code located within a different region of system memory, it is likely that program code for the first task will be swapped out of the cache in favor of program code for the second task. Consequently, as the processor continues to switch between the first and second tasks, a significant number of cache misses occurs, thus lowering the average cache hit-rate.
Similarly, when a single task alternately processes data stored in two different regions in memory (e.g., an audio data store and a video data store in a multi-media application), cache thrashing tends to occur as the task alternates between processing the two different data stores.
SUMMARY OF THE INVENTION
A method and apparatus for compartmentalizing a cache memory are disclosed. A cache memory having a plurality of storage compartments receives one or more cache compartment signals at one or more inputs. The cache compartment signals are from a source external to the cache memory. Based on the one or more cache compartment signals, cache compartment logic selects one of the plurality of storage compartments to store data after a cache miss.
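The selection step in the summary can be sketched as follows. The way-partitioning scheme, names, and the two-compartment split are assumptions for illustration, not the patented circuit: the point is only that an externally supplied compartment signal restricts which storage a miss may fill, so one task cannot evict another task's lines:

```python
# Hedged sketch: an external compartment signal restricts which ways of a
# set a cache miss is allowed to fill. Splitting a 4-way set into two
# 2-way compartments is an illustrative assumption.
def select_fill_ways(compartment_signal, ways_per_set=4):
    """Return the ways the given compartment may replace on a miss."""
    half = ways_per_set // 2
    if compartment_signal == 0:
        return list(range(0, half))            # compartment 0: ways 0..1
    return list(range(half, ways_per_set))     # compartment 1: ways 2..3

print(select_fill_ways(0))  # -> [0, 1]
print(select_fill_ways(1))  # -> [2, 3]
```

Under this kind of scheme, the two tasks (or the two data stores in the multi-media example above) fill disjoint compartments, so switching between them no longer triggers the mutual evictions described in the background.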
REFERENCES:
patent: 4195342 (1980-03-01), Joyce et al.
patent: 4315312 (1982-02-01), Schmidt
patent: 4853846 (1989-08-01), Johnson et al.
patent: 5014195 (1991-05-01), Farrell et al.
patent: 5226147 (1993-07-01), Fujishima et al.
patent: 5517633 (1996-05-01), Ohta et al.
patent: 5553262 (1996-09-01), Ishida et al.
patent: 5724547 (1998-03-01), Iyengar et al.
patent: 5732242 (1998-03-01), Mowry
patent: 5761715 (1998-06-01), Takahashi
patent: 5778428 (1998-07-01), Batson et al.
patent: 5831889 (1998-11-01), Higaki
patent: 5835948 (1998-11-01), Olarig et al.
patent: 5946710 (1999-08-01), Bauman et al.
patent: 6026470 (2000-02-01), Arimilli et al.
“Computer Organization & Design: The Hardware/Software Interface”, David Patterson et al., 1994, pp. 457-481 & Table of Contents.
Blakely, Sokoloff, Taylor & Zafman LLP
Intel Corporation