Caching method and apparatus

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification: C711S143000, C711S145000, C711S159000
Status: active
Patent number: 06415357

ABSTRACT:

FIELD OF THE INVENTION
The present invention is directed to computer systems, and more particularly, to data caching methods and apparatus for improved performance in such systems.
BACKGROUND
A multiprocessor computer system, by definition, comprises a plurality of instruction processors. Each instruction processor typically has access to the main memory of the computer system. In many multiprocessor computer systems, each instruction processor has its own local cache memory for storing frequently accessed data from the main memory. When a given processor accesses data from an address in main memory, a copy of the retrieved data is stored locally in its cache memory. On subsequent requests for data at the same address, the data can be read out of the local cache memory, rather than having to access the main memory. Accessing data from a local cache memory is much faster than accessing data from main memory, and thus the use of cache memories in a multiprocessor computer system typically improves the performance of the system.
Some multiprocessor computer systems allow copies of data from the main memory to be stored in the local cache of a given processor in either a read-only form (i.e., the processor is not permitted to modify the data in its cache) or a writeable form (meaning the processor is permitted to modify the data in its cache). A writeable copy of data stored in a local cache is referred to herein as “original” data. A read-only copy of data stored in a local cache is referred to as a “copy”. In other nomenclature, a processor that holds data in its cache in a writeable form (i.e., “original” data) is sometimes referred to as holding that data in an “exclusive” state (or as having “exclusive” access rights to that data), and a processor that holds data in its cache in a read-only form (i.e., a “copy”) is sometimes referred to as holding that data in a “shared” state (or as having “shared” access rights in that data).
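The distinction between "original" (exclusive, writeable) data and a "copy" (shared, read-only) can be sketched as follows. This is an illustrative model only; the class and field names are hypothetical and do not appear in the patent.

```python
from enum import Enum

class AccessState(Enum):
    """Access rights a processor's cache can hold for a line of data."""
    SHARED = "copy"         # read-only "copy": reads allowed, writes forbidden
    EXCLUSIVE = "original"  # writeable "original": this cache may modify the data

class CacheLine:
    def __init__(self, address, data, state):
        self.address = address
        self.data = data
        self.state = state

    def write(self, new_data):
        # A processor may only modify data it holds as "original".
        if self.state is not AccessState.EXCLUSIVE:
            raise PermissionError("line held as read-only copy; "
                                  "exclusive rights required to write")
        self.data = new_data

line = CacheLine(address=0x1000, data=42, state=AccessState.SHARED)
# line.write(99) would raise PermissionError while the line is SHARED.
line.state = AccessState.EXCLUSIVE
line.write(99)  # permitted once the line is held as original data
```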
When a processor attempts to fetch data from an address in main memory and that data is currently stored as “original” data in the local cache of another processor, that other processor must “return” the original data to main memory (or pass it directly to the requesting processor) so that it can be accessed by the requesting processor. This is commonly referred to as a “return” operation. As the number of processors in a multiprocessor computer system increases, the inventors have discovered that the overhead associated with the movement of original data between processors (e.g., as a result of numerous “return” operations) results in a larger than expected performance degradation. The present invention addresses this problem.
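The "return" operation described above can be illustrated with a minimal sketch. This is not the patent's implementation; the `Processor` class, the `owners` directory, and the function name are assumptions made for illustration.

```python
class Processor:
    def __init__(self, name):
        self.name = name
        self.cache = {}  # address -> data this processor holds as "original"

def fetch_original(requester, address, main_memory, owners):
    """Fetch original (writeable) data, forcing a "return" if another
    processor currently holds that address as original data."""
    owner = owners.get(address)
    if owner is not None and owner is not requester:
        # "Return" operation: the current owner hands its (possibly
        # modified) data back to main memory and gives up exclusive rights.
        main_memory[address] = owner.cache.pop(address)
    requester.cache[address] = main_memory[address]
    owners[address] = requester

mem = {0x40: 7}
owners = {}
a, b = Processor("A"), Processor("B")
fetch_original(a, 0x40, mem, owners)
a.cache[0x40] = 8  # A modifies its original data
fetch_original(b, 0x40, mem, owners)
# B's fetch forces A to return the modified value to main memory first.
```

Each such return adds latency; as the processor count grows, so does the number of returns, which is the overhead the invention targets.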
SUMMARY OF THE INVENTION
The present invention is directed to a method and apparatus for use in a computer system having a main memory, a processor that issues addresses to the main memory to retrieve data stored at those addresses, and a cache in which copies of data retrieved by the processor are temporarily stored. According to the method of the present invention, each time original data (i.e., a modifiable copy of data) is fetched from the main memory at a particular address and a copy of the data is stored in the cache, the address of the copy of the data is stored in a queue having a predetermined depth. Once the depth of the queue is reached, the storage of each new address in the queue causes a previously stored address to be output from the queue. For each address output from the queue, the cache returns the corresponding data to the main memory. Use of the queue in accordance with this method effectively places a limit on the amount of original data stored in the cache. Preferably, the queue comprises a first-in, first-out queue. Also, the depth of the queue (i.e., the number of individual address entries in the queue) preferably is programmable, providing flexibility in establishing the limit on the amount of original data in the cache.
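The method described above can be sketched with a simple FIFO of addresses. This is a behavioral model under stated assumptions, not the patented apparatus; the class name and the `return_to_memory` callback are illustrative.

```python
from collections import deque

class OriginalDataLimiter:
    """FIFO queue of addresses that caps how much "original" (writeable)
    data a cache holds, per the method described above."""

    def __init__(self, depth, return_to_memory):
        self.depth = depth                         # programmable queue depth
        self.queue = deque()                       # FIFO of original-data addresses
        self.return_to_memory = return_to_memory   # callback performing the "return"

    def on_original_fetch(self, address):
        # Each fetch of original data records its address in the queue.
        self.queue.append(address)
        # Once the depth is exceeded, the oldest address is output from the
        # queue and the cache returns its data to main memory.
        while len(self.queue) > self.depth:
            evicted = self.queue.popleft()
            self.return_to_memory(evicted)

returned = []
limiter = OriginalDataLimiter(depth=2, return_to_memory=returned.append)
for addr in (0x10, 0x20, 0x30):
    limiter.on_original_fetch(addr)
# With depth 2, the oldest address (0x10) has been returned to main memory,
# so the cache never holds more than two lines of original data.
```

Because `depth` is an ordinary parameter here, the limit on original data is programmable, mirroring the flexibility the summary describes.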
Apparatus according to the present invention, for use in a computer system having a main memory, a processor that issues addresses to the main memory to retrieve data stored at those addresses, and a cache in which copies of data retrieved by the processor are temporarily stored, comprises a queue that stores the address of each copy of data stored in the cache. The queue has a depth whereby once the depth is reached, the addresses are successively read out of the queue. The cache then returns to the main memory, for each address read out of the queue, the corresponding data stored in the cache. As mentioned above, the queue preferably comprises a first-in, first-out queue, and the depth of the queue is preferably programmable.
While the present invention can be used in single-processor computer systems, it is particularly useful in larger multiprocessor computer systems, where the overhead associated with "return" requests, which result for example when a task running on one processor requests access to original data held in the cache of another processor, can lead to significant performance degradation. In such systems, the present invention can be used to limit the amount of original data held in the cache of each processor, thereby reducing the overall number of "return" requests issued to the processor caches and mitigating the performance degradation discussed above.
Additional features and advantages of the present invention will become evident hereinafter.


REFERENCES:
patent: 4159532 (1979-06-01), Getson, Jr. et al.
patent: 4774687 (1988-09-01), Taniguchi et al.
patent: 4802086 (1989-01-01), Gay et al.
patent: 5023776 (1991-06-01), Gregor
patent: 5117493 (1992-05-01), Jensen
patent: 5155825 (1992-10-01), Moughanni et al.
patent: 5317720 (1994-05-01), Stamm et al.
patent: 5333296 (1994-07-01), Bouchard et al.
patent: 5353426 (1994-10-01), Patel et al.
patent: 5363486 (1994-11-01), Olson et al.
patent: 5404482 (1995-04-01), Stamm et al.
patent: 5404483 (1995-04-01), Stamm et al.
patent: 5506967 (1996-04-01), Barajas et al.
patent: 5539895 (1996-07-01), Bishop et al.
patent: 5544340 (1996-08-01), Doi et al.
patent: 5561779 (1996-10-01), Jackson et al.
patent: 5579503 (1996-11-01), Osborne
patent: 5590379 (1996-12-01), Hassler et al.
patent: 5664145 (1997-09-01), Apperley et al.
patent: 5666482 (1997-09-01), McClure
patent: 5687348 (1997-11-01), Whittaker
patent: 5708837 (1998-01-01), Handlogten
patent: 5715425 (1998-02-01), Goldman et al.
patent: 5831640 (1998-11-01), Wang et al.
patent: 5845324 (1998-12-01), White et al.
patent: 6061765 (2000-05-01), Van Doren et al.
patent: 6105108 (2000-08-01), Steely et al.
patent: 6366984 (2002-04-01), Carmean et al.

