Leaky cache mechanism

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification codes: C711S133000, C711S135000, C711S136000
Type: Reexamination Certificate
Status: active
Patent number: 06728835

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to methods and apparatus for controlling a level two cache memory by multiple users and more particularly relates to controlling flushing of the level two cache memory.
2. Description of the Prior Art
It is known in the prior art to develop computer systems having cache memory(s) built into the basic architecture. The two fundamental characteristics of any memory unit are capacity (i.e., number of storage cells) and speed. The cost of a memory unit is, of course, increased with increased capacity and/or increased speed. Because of the time delays necessitated by increased size, memory systems which are both very large in capacity and very fast tend to be cost prohibitive.
For virtually all general-purpose computers, therefore, cost requirements dictate that the main storage subsystem will operate more slowly than the processor(s) it serves. As a result, there is a constant mismatch between the rate at which data can be accessed from the main storage subsystem and the rate at which that data is processed. A recurring performance issue in computer design is thus the reduction of the latency between a processor's request for memory access and the time when that request is actually honored by the main storage subsystem.
A common technique for matching a relatively high speed processor to a relatively low speed main storage subsystem is to interpose a cache memory in the interface. The cache memory is much faster but of much smaller capacity than the main storage subsystem. Data requested by the processor is stored temporarily in the cache memory. To the extent that the same data remains within the cache memory to be utilized more than once by the processor, substantial access time is saved by supplying the data from the cache memory rather than from the main storage subsystem. Further savings are realized by loading the cache memory with blocks of data located near the requested data, under the assumption that other data from the loaded block will soon be needed.
There are additional issues to be considered with regard to cache memory design. Program instruction data, for example, tends to be quite sequential and involves only read accesses. However, operand data may involve both read and write accesses. Therefore, it is helpful to optimize cache memory design by dividing instruction processor cache memories into program instruction and operand portions.
Furthermore, if a computer system contains multiple processing units, provision must be made to ensure that data locations accessed by a first processing unit reflect any modifications made by write operations from a second processing unit. This data coherency problem is usually solved via a store-through policy (i.e., operand writes cause an immediate transfer to main storage) or a store-in policy (i.e., the cache memory contains the only updated copy of the data, and flags are needed to show that the corresponding main storage location is obsolete).
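The distinction between the two write policies can be made concrete with a brief C++ sketch. This is an illustration only, not text or circuitry from the patent; the CacheLine and MainStorage types and the function names are assumptions made for the example.

#include <cstdint>
#include <unordered_map>

struct CacheLine {
    std::uint64_t data  = 0;
    bool          dirty = false;   // store-in only: main storage copy is obsolete
};

struct MainStorage {
    std::unordered_map<std::uint64_t, std::uint64_t> words;
    void write(std::uint64_t addr, std::uint64_t value) { words[addr] = value; }
};

// Store-through: every operand write is forwarded to main storage immediately,
// so the cache never holds the only valid copy of the data.
void store_through_write(std::unordered_map<std::uint64_t, CacheLine>& cache,
                         MainStorage& mem,
                         std::uint64_t addr, std::uint64_t value) {
    cache[addr].data = value;
    mem.write(addr, value);
}

// Store-in: the write stays in the cache and a dirty flag records that the
// main storage location is now obsolete; the line must be written back to the
// next level of the hierarchy when it is eventually flushed.
void store_in_write(std::unordered_map<std::uint64_t, CacheLine>& cache,
                    std::uint64_t addr, std::uint64_t value) {
    cache[addr].data  = value;
    cache[addr].dirty = true;
}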
As the use of cache memory has become more common, it is now known to utilize multiple levels of cache memory within a single system. U.S. Pat. No. 5,603,005, issued to Bauman et al. on Feb. 11, 1997, incorporated herein by reference, contains a description of a system with three levels of cache memory. In the multiprocessor Bauman et al. system, each instruction processor has dedicated instruction (i.e., read-only) and operand (i.e., write-through) cache memories. This corresponds to level one cache memory.
A level two cache memory is located within each system controller. The level two cache memory of Bauman et al. is a store-in cache memory which is shared by all of the processors coupled to the corresponding system controller. The system of Bauman et al. contains a level three cache which is coupled between each of the system controllers and a corresponding main memory unit.
It is axiomatic that the capacity of a cache memory is less than that of main storage. Therefore, after a period of time, a cache memory typically fills up completely, necessitating a flushing of some of its contents before any new data may be added. For a store-in level two cache memory, such as that taught by Bauman et al., data modified by an input transfer from an input/output processor or by an operand write from an instruction processor must be stored within the level three cache memory and/or main storage upon flushing, because the cache holds the most current copy of that data.
A primary key to efficiency within a cache memory architecture is the process whereby some of the data within a cache memory is chosen to be flushed to accommodate newly requested data. This is particularly important for the level two, store-in cache memory of Bauman et al., because the flushing process necessitates writing the changed data to the level three cache memory.
The most common technique known in the prior art for choosing which data to flush is called least recently used (LRU). This approach is based upon a determination of which data has been resident within the cache memory for the longest period of time without being utilized for processing. U.S. Pat. No. 5,625,793, issued to Mirza on Apr. 29, 1997, suggests a change to the LRU technique. Yet these prior art approaches remain suboptimal, reducing the efficiency of the cache memory architecture.
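A conventional LRU replacement set can be sketched in a few lines of C++. The sketch below illustrates only the prior art policy described above, not any patented implementation; the LruSet name and its fixed-capacity bookkeeping are assumptions made for the example.

#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

class LruSet {
public:
    explicit LruSet(std::size_t capacity) : capacity_(capacity) {}

    // Touch an address: on a hit the entry moves to the most-recently-used
    // position; on a miss the least-recently-used entry is evicted (flushed)
    // if the set is full, and the new address is installed as most recent.
    void access(std::uint64_t addr) {
        auto it = index_.find(addr);
        if (it != index_.end()) {                  // hit: promote to MRU
            order_.splice(order_.begin(), order_, it->second);
            return;
        }
        if (order_.size() == capacity_) {          // full: evict the LRU entry
            index_.erase(order_.back());
            order_.pop_back();
        }
        order_.push_front(addr);
        index_[addr] = order_.begin();
    }

private:
    std::size_t capacity_;
    std::list<std::uint64_t> order_;               // front = MRU, back = LRU
    std::unordered_map<std::uint64_t,
                       std::list<std::uint64_t>::iterator> index_;
};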
SUMMARY OF THE INVENTION
The present invention overcomes many of the disadvantages associated with the prior art by providing a method of and apparatus for improving upon the least recently used algorithm for flushing of a level two cache memory. Though the least recently used algorithm works well in many situations, there are circumstances in which there is specific a priori knowledge that requested data will, or will not, be used again in the near future.
In accordance with the present invention, this a priori knowledge may be utilized to enhance the basic LRU-determined flush activity of the level two cache memory. If it is known that the requested data is highly likely to be used again soon, the existing LRU algorithm ensures that the data will not be prematurely flushed from the level two cache memory. However, if it is known that reuse is highly unlikely, under normal operation of the LRU the data would be aged within the level two cache, potentially causing more frequently used data to be flushed. Therefore, in accordance with the present invention, provisions are made to quickly dispose of the little-used data.
In accordance with the preferred mode of the present invention, certain write instructions are included within the processor repertoire which assert a “release ownership” signal to the system controller. This signal indicates that the data need not be maintained within the store-in level two cache. A similar instruction, “read, no replace,” is provided for read accesses.
In either case (i.e., release ownership write or read, no replace), the system controller is instructed not to unnecessarily maintain the accessed data within the level two cache memory. This results in one of two situations.
If there is a hit within the level two cache memory, the data access is made there, but the element is tagged as least (rather than most) recently used. That means that the system controller will flush that data element as soon as additional space is needed.
When the access request results in a cache miss, the request is made of the level three cache, as usual. However, as soon as the data is provided to the requesting instruction processor, the data element is flushed for a write request or not replaced in the cache for a read request.
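One way to picture the combined hit and miss behavior is the C++ sketch below. It is an interpretation for illustration only, not the patent's hardware; the L2Set structure, the Hint enumeration, and the choice to simply skip installation on a hinted miss are assumptions made for the example.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>

enum class Hint { None, DoNotRetain };   // asserted by the release-ownership
                                         // write and read-no-replace accesses

struct L2Set {
    std::size_t capacity;
    std::deque<std::uint64_t> tags;      // front = most recently used,
                                         // back  = least recently used

    // Returns true on a level two hit. With the DoNotRetain hint, a hit is
    // tagged as least (rather than most) recently used so it is flushed as
    // soon as space is needed; a hinted miss is serviced by the level three
    // cache (not shown) but the line is not retained in this set.
    bool access(std::uint64_t tag, Hint hint) {
        auto it = std::find(tags.begin(), tags.end(), tag);
        if (it != tags.end()) {                    // hit within level two
            tags.erase(it);
            if (hint == Hint::DoNotRetain)
                tags.push_back(tag);               // tag as LRU: next to flush
            else
                tags.push_front(tag);              // normal update: tag as MRU
            return true;
        }
        if (hint == Hint::DoNotRetain)             // hinted miss: data goes to
            return false;                          // the requester, nothing kept
        if (tags.size() == capacity)
            tags.pop_back();                       // flush the current LRU line
        tags.push_front(tag);                      // install new line as MRU
        return false;
    }
};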
Thus, in accordance with the present invention, the efficiency of the level two cache is improved, because data known to be unneeded is not maintained within the limited storage at the expense of potentially more needed data. This means that, on average, the more needed data will remain within the level two cache for longer periods of time.


REFERENCES:
patent: 4928239 (1990-05-01), Baum et al.
patent: 5353425 (1994-10-01), Malamy et al.
patent: 560
