Method and apparatus for accessing a cache memory...

Electrical computers and digital processing systems: memory – Address formation – Generating prefetch, look-ahead, jump, or predictive address


Details

Type: Reexamination Certificate
Status: active
Patent number: 06189083

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates generally to computer systems and more specifically to the management of cache memory in a computer system. As is known in the art, computer processing systems include a central processing unit which operates on data stored in a main memory. Increased computer processing performance is often achieved by including a smaller, faster memory, called a cache, between the central processing unit and the main memory for temporary storage of data. The cache reduces the delay, or latency, associated with memory access by storing subsets of the main memory data that can be quickly read from the cache and modified by the central processing unit.
Because computer processes commonly reference main memory data in contiguous address space, data is generally obtained from main memory and stored in the cache in blocks. There are a variety of methods used to map blocks of data from main memory into the cache. Two typical cache arrangements are direct-mapped caches and set-associative caches.
In a conventional direct-mapped cache, a block of data from memory is mapped into the cache using the lower bits of the memory address. The lower bits of the memory address are generally called the cache index. The upper bits of the memory address of the data block are generally called the “tag” of that block. A tag store, which generally has a number of locations equal to the number of blocks in the cache, is used to store the tag of each block of data in the cache.
When a processor requires data from the cache, it uses the cache index of the associated address to access the tag store and compares the stored tag to the upper bits of the memory address of the required data. If the data is not in the cache, the tag does not match the upper address bits and a “cache miss” occurs. When there is a cache miss, a main memory read is performed to fill the cache with the required data. It is desirable to minimize the number of cache misses in order to avoid the latency incurred by each resulting memory reference.
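As a rough sketch, the direct-mapped lookup described above can be modeled in C. The block size, line count, address width, and names below are illustrative assumptions, not values from the patent:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative direct-mapped cache: 64-byte blocks, 256 lines,
 * 32-bit addresses. All field widths are assumptions for the sketch. */
#define BLOCK_BITS 6                          /* 64-byte block        */
#define INDEX_BITS 8                          /* 256 cache lines      */
#define INDEX_MASK ((1u << INDEX_BITS) - 1u)

typedef struct {
    bool     valid;
    uint32_t tag;                             /* upper address bits   */
    uint8_t  data[1 << BLOCK_BITS];
} cache_line;

static cache_line tag_store[1 << INDEX_BITS];

/* Returns true on a hit; a miss would trigger a main-memory fill. */
bool dm_lookup(uint32_t addr)
{
    uint32_t index = (addr >> BLOCK_BITS) & INDEX_MASK;  /* cache index */
    uint32_t tag   = addr >> (BLOCK_BITS + INDEX_BITS);  /* block tag   */
    const cache_line *line = &tag_store[index];
    return line->valid && line->tag == tag;              /* tag compare */
}
```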
Direct-mapped caches are advantageous because they provide a cache system with minimal complexity. Also, because the addressing scheme is straightforward, the cache is able to quickly return data to the central processing unit. However, one drawback of direct-mapped caches is that because there is only one possible cache location for the numerous blocks of data sharing a common cache index, the miss rate is generally high. Thus, although direct-mapped caches return data to the central processing unit quickly, their performance is significantly reduced by this high miss rate.
Set-associative caches reduce the number of misses by providing multiple cache locations for memory data having a common cache index. In set-associative caching, the cache is subdivided into a plurality of “sets”. Each set has an associated tag store for storing the tags of the blocks of data stored in the set. As in direct-mapped caching, the location of a particular item within the cache is identified by a cache index derived from the lower bits of the memory address.
When the processor wants to fetch data from the cache, the cache index is used to address each of the sets and their associated tag stores. Each set outputs a data item located at the cache index to a large multiplexer. The associated tags are each compared against the upper bits of the main memory address to determine if any data item provided by the sets is the required data item. Assuming that the data item to be fetched is in one of the sets of the cache, the tag output by the tag store associated with that set will match the upper bits of the memory address. The multiplexer passes the data corresponding to the matched tag to the processor.
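A minimal sketch of the same lookup for a set-associative arrangement, again with assumed field widths and hypothetical names; the loop models the parallel comparators whose match result drives the select lines of the output multiplexer:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative 4-way set-associative cache with assumed field widths:
 * 64-byte blocks, 256 cache indices, 32-bit addresses. */
#define SA_BLOCK_BITS 6
#define SA_INDEX_BITS 8
#define SA_INDEX_MASK ((1u << SA_INDEX_BITS) - 1u)
#define SA_WAYS       4

typedef struct {
    bool     valid;
    uint32_t tag;
} sa_entry;

static sa_entry sa_sets[1 << SA_INDEX_BITS][SA_WAYS];

/* Returns the matching set, or -1 on a cache miss. In hardware all
 * SA_WAYS tag comparisons happen in parallel; the matching comparator
 * selects which set's data the multiplexer passes to the processor. */
int sa_probe(uint32_t addr)
{
    uint32_t index = (addr >> SA_BLOCK_BITS) & SA_INDEX_MASK;
    uint32_t tag   = addr >> (SA_BLOCK_BITS + SA_INDEX_BITS);
    for (int w = 0; w < SA_WAYS; w++)
        if (sa_sets[index][w].valid && sa_sets[index][w].tag == tag)
            return w;
    return -1;  /* cache miss: fill from main memory */
}
```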
Set-associative cache mapping thus improves on a direct-mapped cache by reducing the frequency of cache misses. However, the time required to perform the set comparison and selection makes a set-associative cache memory relatively slow.
Computer systems typically implement either a direct-mapped or a set-associative cache memory. Some prior art computer systems, however, have included a cache memory combining the advantages of both set-associative and direct-mapped caches. Such caches use a RAM device to aid in the selection of the cache set containing a required data element. However, these devices consume a significant amount of semiconductor real estate and are limited to caches having a small number of sets.
It is therefore desirable to provide the same cache-selection functionality using less semiconductor real estate while allowing that functionality to be scalable to caches having large numbers of sets.
SUMMARY OF THE INVENTION
The invention resides in encoding the tag addresses stored in a tag store such that a smaller representation of the differences between selected ones of those tag addresses is stored in an associated memory, thus reducing the amount of integrated circuit area that memory requires. Further, the invention resides in a method and apparatus for determining the differences between the tag addresses and for comparing those differences to an encoded version of the tag address associated with a required data element, such that one of many cache sets in a cache memory can be quickly selected and accessed.
One or more distinguishing bit RAMs store differences between encoded representations of tag addresses stored in an associated tag store. A comparison is performed between a selected difference value and the corresponding value of the encoded version of a tag address for a requested data element.
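One plausible reading of this arrangement, sketched below for a two-set cache: the distinguishing bit RAM records only the position of a bit at which the two stored tags differ, together with that bit's value in one of the sets, so set selection reduces to testing a single bit of the requested tag. The encoding, names, and two-set restriction here are assumptions for illustration, not the patent's actual design; a full tag compare would still verify the selection.

```c
#include <stdint.h>

/* Hypothetical distinguishing-bit entry for one cache index of a
 * two-set cache. Only a bit position and one bit value are stored,
 * far less than two full tags. */
typedef struct {
    uint8_t bit_pos;   /* a bit position where the two tags differ */
    uint8_t set0_val;  /* value of that bit in set 0's tag          */
} dbit_entry;

/* Recompute the entry whenever the tags at this index change.
 * Assumes tag0 != tag1, which holds for two distinct blocks that
 * share a cache index. */
dbit_entry dbit_encode(uint32_t tag0, uint32_t tag1)
{
    uint32_t diff = tag0 ^ tag1;            /* bits where tags differ */
    dbit_entry e = { 0, 0 };
    while (((diff >> e.bit_pos) & 1u) == 0)
        e.bit_pos++;                        /* first differing bit    */
    e.set0_val = (uint8_t)((tag0 >> e.bit_pos) & 1u);
    return e;
}

/* Select a set by testing a single bit of the requested tag instead
 * of comparing full tags; a full tag compare verifies the choice. */
int dbit_select(dbit_entry e, uint32_t req_tag)
{
    return (((req_tag >> e.bit_pos) & 1u) == e.set0_val) ? 0 : 1;
}
```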
With such an arrangement, the distinguishing bit RAM(s) store smaller amounts of data, thereby requiring less semiconductor real estate. Further, using a plurality of distinguishing bit RAMs allows the design to be scalable to caches including large numbers of sets.


REFERENCES:
patent: 4894772 (1990-01-01), Langendorf
patent: 5509135 (1996-04-01), Steely, Jr.
patent: 5966737 (1999-10-01), Steely, Jr. et al.
