High speed LRU line replacement system for cache memories

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

Classification codes: C711S133000, C711S134000, C711S144000, C711S159000, C711S160000

Type: Reexamination Certificate

Status: active

Patent number: 06745291

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to high speed computers (CPUs) of the type having cache memories and cache controllers for speeding up access to data and instructions stored in memories. More particularly, the present invention relates to a novel expanded cache memory and a memory table for identifying the least recently used (LRU) line of data in a cache memory for replacement when a miss occurs.
2. Description of the Prior Art
Cache memories are well known and have been employed by IBM in some of its Series 360 line of computers since the 1960s, as well as by Intel in its 486 and Pentium microprocessor chips.
Intel chips have employed two levels of cache memory. The L1 cache is made on the same chip as the CPU, and access to data and instructions has been substantially speeded up. Now that Level 2 cache memories have been placed between the CPU and the main memories, even greater access speeds have been achieved. The trend is to larger and faster microprocessor chips; thus, the trend extends to larger cache memories to accommodate the even larger RAM memories that accompany these chips. As technology keeps advancing, the trend has been to increase the size and speed of cache memories and to expand their application. There are many practical trade-offs in cache memory design, but as the speed of computers keeps increasing, there is intense pressure to increase cache memory sizes to improve the effective hit rate, which, in effect, decreases the average access time to data. However, because a cache memory is smaller than the actual memory being referenced, and very high speed cache designs are typically set associative, there is a significant design trade-off involving the size and number of sets in the set associative memory. It is possible to reach a point in a set associative cache design where the increased number of sets can slow the access time regardless of the size of the cache memory.
An example of a two-way set associative data cache as employed in Pentium microprocessors will illustrate the point. The data cache comprises two ways, or banks, of cache memory, each having 128 lines of 32 bytes of data. When a miss occurs in the cache memory, a whole new line of data must be obtained from the L2 cache or from main memory and written over the least recently used (LRU) line in one of the two ways. The logic employed assumes that the program being run will seldom, if ever, call for the same line number from three different pages of the two million pages of data in memory. However, this is not always the case: each time the same three lines are needed, only two ways are available, so only two of the lines can possibly reside in a two-way cache. If the third line is not in the cache memory, the cache controller must issue a line fill request, fetch the missing line, and then write it into the way whose line was least recently used. Thus, it is possible to throw out a line that will be needed in the short term, because the two-way set associative data cache permits only two lines with the same line number even though the pages from which they came number in the millions.
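A minimal C sketch may make the thrashing scenario just described concrete. The parameters (128 sets, 32-byte lines, a single LRU indicator per set) follow the Pentium example above, but the names and structure (access_line, CacheSet, the driver loop) are illustrative assumptions, not the patent's design: three lines that share a line number but come from different pages evict one another on every pass.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SETS   128          /* 128 lines per way, as in the example above */
    #define LINE_BYTES 32           /* 32 bytes of data per line                  */
    #define NUM_WAYS   2

    typedef struct {
        bool     valid;
        uint32_t tag;               /* page (tag) portion of the address */
    } CacheLine;

    typedef struct {
        CacheLine way[NUM_WAYS];
        int       lru;              /* index of the least recently used way */
    } CacheSet;

    static CacheSet cache[NUM_SETS];

    /* Returns true on a hit; on a miss, models the line fill by
       overwriting the least recently used way. */
    static bool access_line(uint32_t addr)
    {
        uint32_t set_idx = (addr / LINE_BYTES) % NUM_SETS;
        uint32_t tag     = addr / (LINE_BYTES * NUM_SETS);
        CacheSet *set    = &cache[set_idx];

        for (int w = 0; w < NUM_WAYS; w++) {
            if (set->way[w].valid && set->way[w].tag == tag) {
                set->lru = 1 - w;   /* the other way becomes LRU */
                return true;
            }
        }
        int victim = set->lru;      /* miss: replace the LRU way */
        set->way[victim].valid = true;
        set->way[victim].tag   = tag;
        set->lru = 1 - victim;      /* the filled way is now most recent */
        return false;
    }

    int main(void)
    {
        /* Three addresses with the same line number but different pages:
           one set is contested by three lines, and every pass after the
           first misses on all three. */
        uint32_t stride = LINE_BYTES * NUM_SETS;   /* same set, new tag */
        for (int pass = 0; pass < 3; pass++)
            for (uint32_t t = 0; t < 3; t++)
                printf("pass %d, line %u: %s\n", pass, (unsigned)t,
                       access_line(t * stride) ? "hit" : "miss");
        return 0;
    }

Compiled and run, every access after the first pass misses, even though only three distinct lines are in play.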
Accordingly, it would be ideal to provide an “N”-way set associative data cache memory that eliminates the problems of two-way set associative data cache memories without introducing the delays, added cost, or other penalties that would ordinarily be a consequence of “N”-way associativity.
SUMMARY OF THE INVENTION
It is a principal object of the present invention to provide an N-way set associative data cache memory.
It is another principal object of the present invention to provide a high speed LRU look-up table for producing the least recently used line in a set associative data cache memory.
It is another principal object of the present invention to provide a modification to widely used data cache memories that permits expansion of the main memory as well as the data cache memory without incurring access time penalties.
According to these and other objects of the present invention there is provided an N-way set associative data cache memory with N tag directories and N ways, or banks, each having M lines that may be accessed by a line address and confirmed by a directory tag address to determine a hit or miss in the cache memory. In the event of a miss, a novel look-up table produces the least recently used line in one of the N ways concurrently with the fetching of a new line of data from an L2 cache or a main memory.
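One plausible way to realize such a look-up table, sketched below in C, is to keep for each of the M lines a small recency-ordered list of way indices, so that the least recently used way is available from a single table read while the line fill proceeds in parallel. The parameters and names here (N_WAYS, M_LINES, order, touch, lru_victim) are assumptions for illustration, not the structure disclosed in the patent.

    #include <stdint.h>
    #include <stdio.h>

    #define N_WAYS  8               /* N ways (assumed value)          */
    #define M_LINES 128             /* M lines per way (assumed value) */

    /* order[set][0] is the most recently used way; order[set][N_WAYS-1]
       is the least recently used way, i.e., the replacement victim. */
    static uint8_t order[M_LINES][N_WAYS];

    static void lru_init(void)
    {
        for (uint32_t s = 0; s < M_LINES; s++)
            for (uint8_t w = 0; w < N_WAYS; w++)
                order[s][w] = w;
    }

    /* On a miss, a single read of the table yields the victim way, so
       the victim is known while the line fill is still in flight. */
    static uint8_t lru_victim(uint32_t set)
    {
        return order[set][N_WAYS - 1];
    }

    /* On a hit (or after a fill) of `way`, move it to the front of the
       ordering; the entries that were more recent each shift down one. */
    static void touch(uint32_t set, uint8_t way)
    {
        uint8_t *o = order[set];
        int pos = 0;
        while (o[pos] != way)       /* find the way's current position */
            pos++;
        for (; pos > 0; pos--)      /* shift more-recent entries down  */
            o[pos] = o[pos - 1];
        o[0] = way;                 /* the touched way is now MRU      */
    }

    int main(void)
    {
        lru_init();
        touch(5, 3);                          /* way 3 of set 5 becomes MRU */
        printf("victim for set 5: way %d\n", lru_victim(5));  /* prints 7  */
        return 0;
    }

Because the victim read and the ordering update touch only this small side table, the victim selection need not wait on the fetch of the replacement line, which is the concurrency the summary describes.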


REFERENCES:
patent: 4463424 (1984-07-01), Mattson et al.
patent: 6230219 (2001-05-01), Fields et al.
patent: 6240489 (2001-05-01), Durham et al.
patent: 6393525 (2002-05-01), Wilkerson et al.
patent: 6434671 (2002-08-01), Chung
