Cache way prediction based on instruction base register

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

711/128; 711/137; 711/204; 711/213

Reexamination Certificate

active

06643739

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to the field of computer systems, and in particular to a predictive n-way associative cache that uses the instruction base register as a predictor of the particular way in the cache that is likely to contain an addressed data item.
2. Description of Related Art
Cache systems are commonly used to reduce the effective delay associated with access to relatively slow memory devices. When a processor requests access to a particular data item in the slower memory, the cache system loads the requested data item into a higher speed memory. Thereafter, subsequent accesses to this same data item are provided via the higher speed memory, thereby avoiding the delay associated with the slower memory. Generally, a “line” of data items that contains the requested data item is loaded from the slower memory into the higher speed memory when the data item is requested, so that any data item within the loaded line can be subsequently provided by the higher speed memory.
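For illustration only (the patent contains no code), a minimal C sketch of this line-granular fill follows; the sixteen-word line size and the helper name fill_line are assumptions of the sketch, not of the patent.

    #include <stdint.h>
    #include <string.h>

    #define WORDS_PER_LINE 16  /* assumed line size */

    /* Round the word address down to a line boundary and copy the whole
     * line from the slow memory into a fast line buffer, so that later
     * accesses to neighboring words are served from the fast memory. */
    static void fill_line(const uint32_t *slow_mem, uint32_t *line_buf,
                          uintptr_t word_addr)
    {
        uintptr_t line_base = word_addr & ~(uintptr_t)(WORDS_PER_LINE - 1);
        memcpy(line_buf, &slow_mem[line_base],
               WORDS_PER_LINE * sizeof(uint32_t));
    }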
The effectiveness of a cache memory access system depends on the likelihood that future data accesses are related to prior data accesses. Generally, the likelihood that a requested data item lies in the same cache line as a previously requested data item is substantial, and the likelihood of satisfying the request from the higher speed cache memory is correspondingly substantial.
Higher speed memory is more costly than slower speed memory, and therefore the amount of available cache memory is generally limited. Cache management schemes are used to determine which data items to remove from the higher speed memory when a new line of data needs to be loaded into it. A commonly used prioritization scheme for retaining data items in the higher speed memory is the "least recently used" (LRU) criterion, wherein the line of the least recently used (i.e., oldest) memory access is replaced by the new line, thereby retaining recently accessed data items. Other criteria, such as "most often used", may also be applied, typically in conjunction with the LRU scheme.
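As a hedged C sketch of such LRU bookkeeping for one set of an n-way cache (the per-way age counters below are one possible representation; the patent does not prescribe any):

    #define NUM_WAYS 4  /* assumed associativity */

    /* Per-way age counters for one set: 0 = most recently used. The ages
     * are assumed initialized to the distinct values 0..NUM_WAYS-1. */
    typedef struct {
        unsigned age[NUM_WAYS];
    } lru_state_t;

    /* Mark 'way' as most recently used; ways that were younger age by one. */
    static void lru_touch(lru_state_t *s, int way)
    {
        unsigned old = s->age[way];
        for (int w = 0; w < NUM_WAYS; w++)
            if (s->age[w] < old)
                s->age[w]++;
        s->age[way] = 0;
    }

    /* The replacement victim is the way with the largest age (the LRU way). */
    static int lru_victim(const lru_state_t *s)
    {
        int victim = 0;
        for (int w = 1; w < NUM_WAYS; w++)
            if (s->age[w] > s->age[victim])
                victim = w;
        return victim;
    }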
Associative caches are commonly used to store lines of data items based upon a subset of the address of the requested item.
FIG. 1 illustrates a conventional addressing scheme for an associative cache 100. An address 110, typically from a processor and discussed further below, is logically partitioned into a tag field 111, an index field 112, and a word field 113. The index field 112 provides an index to an associated set of cache lines in a cache 120. Each cache line of the set is termed a "way", and the cache 100 corresponds to an n-way associative cache. The size of the word field 113, j, corresponds to the size of a data line, 2^j words. That is, if there are sixteen words per data line, the word field 113 will be four bits wide; if there are sixty-four words per data line, the word field 113 will be six bits wide. Using this power-of-two relationship between the word field 113 and the size of the data line, the tag and index fields uniquely identify each data line in the memory.
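For concreteness, this partitioning can be sketched in C as below; the particular widths (a four-bit word field, i.e. sixteen words per line, and a seven-bit index field) are assumptions of the sketch:

    #include <stdint.h>

    #define WORD_BITS  4                     /* j = 4: sixteen words per line */
    #define INDEX_BITS 7                     /* assumed: 128 sets */

    #define WORD_MASK  ((1u << WORD_BITS) - 1)
    #define INDEX_MASK ((1u << INDEX_BITS) - 1)

    /* Split a word address 110 into the tag 111, index 112, and word 113
     * fields. E.g. address 0x12345 yields word 0x5, index 0x34, tag 0x24. */
    static void split_address(uint32_t addr,
                              uint32_t *tag, uint32_t *index, uint32_t *word)
    {
        *word  = addr & WORD_MASK;
        *index = (addr >> WORD_BITS) & INDEX_MASK;
        *tag   = addr >> (WORD_BITS + INDEX_BITS);
    }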
When an addressed data item is loaded into the cache 120 from a slower memory (not shown), the line of data containing the data item is placed in a selected way, the index field defining the location within the selected way for placing the data line. The selection of the way is effected using one of a variety of commonly available algorithms, such as the aforementioned LRU prioritization scheme. When the addressed data item is stored in a particular line area DLine-a, DLine-b, etc. in the cache 120, the tag field 111 is also stored, as illustrated by the fields Tag-a, Tag-b, etc. in FIG. 1. The stored tag field, in combination with the data line's location within the way (corresponding to the data line's index field), uniquely identifies the data line that is stored in the cache 120.
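Continuing the C sketches above, the cache 120 can be modeled with the following hypothetical structures; the geometry constants are assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_WAYS       4    /* as in the LRU sketch above */
    #define NUM_SETS       128  /* 2^INDEX_BITS */
    #define WORDS_PER_LINE 16   /* 2^WORD_BITS */

    /* One cache line: the stored tag (Tag-a, Tag-b, ...), the modified
     * ("dirty") bit 129 discussed below, and the data line (DLine-a, ...). */
    typedef struct {
        bool     valid;
        bool     modified;                 /* field 129 */
        uint32_t tag;                      /* stored copy of tag field 111 */
        uint32_t data[WORDS_PER_LINE];
    } cache_line_t;

    /* The n-way associative cache 120: one line per way at each index. */
    typedef struct {
        cache_line_t way[NUM_WAYS][NUM_SETS];
    } cache_t;

    /* Install a fetched line into the selected way at the given index,
     * storing the tag alongside the data so the line can be identified. */
    static void install_line(cache_t *c, int way, uint32_t index,
                             uint32_t tag, const uint32_t *line)
    {
        cache_line_t *dst = &c->way[way][index];
        for (int w = 0; w < WORDS_PER_LINE; w++)
            dst->data[w] = line[w];
        dst->tag      = tag;
        dst->valid    = true;
        dst->modified = false;
    }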
Before an addressed data item is loaded into the cache 120, the cache 120 is checked to determine whether the data item is already located in the cache 120, to potentially avoid having to load the data item from the slower memory. The addressed data item may be located in the cache due to a prior access to this data item, or due to a prior access to a data item within the same line of data DLine-a, DLine-b, etc. as the currently addressed data item. The index field 112 defines the set of n lines in the cache that are associated with this address. Each of the stored tags 121a, 121b, etc. corresponding to each of the stored lines 125a, 125b, etc. in the associated set is compared to the tag field 111 of the addressed data item, via the comparators 130a, 130b, etc. While this comparison is being made, each of the stored data lines 125a, 125b, etc. corresponding to the index field 112 is loaded into a high-speed buffer 140, so as to be available if the data item is currently loaded in the cache.
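Continuing the sketch, the simultaneous comparison performed by the comparators 130a, 130b, etc. can be modeled as a loop over the ways (in hardware all n comparisons occur in parallel):

    /* Compare the stored tags 121a, 121b, ... of the indexed set against
     * the tag field 111 of the address. Returns the hitting way (the
     * software analogue of asserting Hit-a, Hit-b, ...), or -1 on a miss. */
    static int lookup_way(const cache_t *c, uint32_t index, uint32_t tag)
    {
        for (int w = 0; w < NUM_WAYS; w++) {
            const cache_line_t *line = &c->way[w][index];
            if (line->valid && line->tag == tag)
                return w;
        }
        return -1;
    }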
If the addressed data item is currently loaded in the cache, the corresponding comparator 130a, 130b, etc. asserts a cache-hit signal, thereby identifying the particular way Hit-a, Hit-b, etc. that contains the data line. If a hit is asserted, the appropriate word is retrieved from the corresponding buffer 140, using the word field 113 to select the appropriate word 141a, 141b, etc. from the data line contained in the buffer 140. The retrieved word is forwarded to the processor that provided the address 110. In a conventional embodiment of the cache system 100, the time required to effect the comparison of the tag field 111 to the stored tag fields 121a, 121b, etc., and the subsequent selection of the appropriate word 141a, 141b, etc. when a cache-hit occurs, is substantially less than the delay time corresponding to the slower memory. In this manner, the effective access time to a data item is substantially reduced when the data item is located in the cache 120.
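Combining the helpers above gives the complete read path as a sketch; cache_read is a hypothetical name, and on a miss the caller falls back to the slower memory as described next:

    static bool cache_read(const cache_t *c, uint32_t addr, uint32_t *out)
    {
        uint32_t tag, index, word;
        split_address(addr, &tag, &index, &word);

        int way = lookup_way(c, index, tag);    /* parallel tag compare */
        if (way < 0)
            return false;                       /* cache miss */

        /* Word select: the word field 113 picks the appropriate word
         * 141a, 141b, ... out of the buffered data line 140. */
        *out = c->way[way][index].data[word];
        return true;
    }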
If a cache-hit does not occur, the above-described load of the addressed data line from memory into a selected way, Way-a, Way-b, etc., of the cache 120 is effected, typically by loading the data line into the least recently used (LRU) way, or into a way chosen by another prioritization scheme, as mentioned above.
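A sketch of this miss handling, reusing the hypothetical lru_victim(), lru_touch(), and install_line() helpers above (lru denotes the LRU state of the indexed set; a full model would keep one lru_state_t per set):

    /* On a miss: pick the LRU victim way, fetch the addressed line from
     * the slow memory, and install it with its tag. Returns the way used.
     * (Write-back of a modified victim is sketched further below.) */
    static int handle_miss(cache_t *c, lru_state_t *lru,
                           const uint32_t *slow_mem, uint32_t addr)
    {
        uint32_t tag, index, word;
        split_address(addr, &tag, &index, &word);

        int victim = lru_victim(lru);
        uint32_t line_base = addr & ~WORD_MASK;   /* line-aligned address */
        install_line(c, victim, index, tag, &slow_mem[line_base]);
        lru_touch(lru, victim);                   /* now most recently used */
        return victim;
    }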
The time required to store words, effectively from the processor to the memory, is similarly reduced via use of the cache 120. The presence of the addressed data item in the cache 120 is determined using the above-described comparison process. If the data item is currently located in the cache 120, the new value of the data item from the processor replaces the selected word, or words, of the buffer 140, and the buffer 140 is loaded into the data line 125a, 125b, etc. containing the addressed data item. The "modified" field 129 is used to signal that the contents of a cached line have changed. Before a data line is overwritten by a new data line, the modified field 129 is checked, and, if the data line has been modified, the modified data line is stored back into the memory, using the stored tag field 121a, 121b, etc. to identify the location in memory at which to store the line.
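A sketch of this write path and of the write-back check, continuing the hypothetical helpers above (the address reconstruction simply inverts split_address()):

    /* Write on a hit: replace the selected word and set the modified bit 129.
     * (On a miss, the line would first be fetched as in handle_miss().) */
    static void cache_write(cache_t *c, uint32_t addr, uint32_t value)
    {
        uint32_t tag, index, word;
        split_address(addr, &tag, &index, &word);

        int way = lookup_way(c, index, tag);
        if (way >= 0) {
            c->way[way][index].data[word] = value;
            c->way[way][index].modified   = true;   /* field 129 */
        }
    }

    /* Before a line is overwritten, a modified line is stored back to memory;
     * the stored tag 121a, 121b, ... plus the index rebuild its address. */
    static void writeback_if_modified(cache_t *c, uint32_t *slow_mem,
                                      int way, uint32_t index)
    {
        cache_line_t *line = &c->way[way][index];
        if (!line->valid || !line->modified)
            return;
        uint32_t base = (line->tag << (WORD_BITS + INDEX_BITS))
                      | (index << WORD_BITS);
        for (int w = 0; w < WORDS_PER_LINE; w++)
            slow_mem[base + w] = line->data[w];
        line->modified = false;
    }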
Although an n-way associative cache provides an effective means of increasing the effective memory access speed, the simultaneous way-comparison scheme, wherein the tag of the addressed data item is compared to all of the stored tags, consumes energy at a rate that is n times that of a one-way associative cache. It is not uncommon for n-way associative caches to run substantially hotter than other areas of an integrated circuit or printed circuit board.
To reduce the power consumption of a conventional n-way associative cache, predictive techniques are applied to select a likely way corresponding to a given address. In a conventional embodiment of a way-prediction scheme, the likely way is first checked for the addressed data item, and only if that way does not contain the addressed data item are the remaining ways of the cache checked.
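As a rough, hedged illustration of such a scheme, continuing the sketch above: a small table remembers the last way used, and only that way is probed first, so only one comparator and one line read are powered on a correct prediction. Indexing the table by the instruction's base-register number reflects this patent's title; conventional predictors typically index by address or instruction bits instead, and the table size is an assumption.

    #define NUM_BASE_REGS 32                 /* assumed register-file size */

    static int predicted_way[NUM_BASE_REGS]; /* last way used, per base register */

    static bool predicted_read(cache_t *c, int base_reg,
                               uint32_t addr, uint32_t *out, bool *mispredict)
    {
        uint32_t tag, index, word;
        split_address(addr, &tag, &index, &word);

        /* Probe only the predicted way: one comparison, one line read. */
        int way = predicted_way[base_reg];
        cache_line_t *line = &c->way[way][index];
        if (line->valid && line->tag == tag) {
            *mispredict = false;
            *out = line->data[word];
            return true;
        }

        /* Misprediction: fall back to the full n-way comparison. */
        *mispredict = true;
        way = lookup_way(c, index, tag);
        if (way < 0)
            return false;                    /* true miss: fetch from memory */
        predicted_way[base_reg] = way;       /* train the predictor */
        *out = c->way[way][index].data[word];
        return true;
    }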
