High performance cache directory addressing scheme for...

Electrical computers and digital processing systems: memory – Address formation – Combining two or more values to create address

Reexamination Certificate


Details

C711S128000, C711S119000


active

06192458

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to upgradeable caches in data processing systems and in particular to cache directory addressing schemes for upgradeable caches. Still more particularly, the present invention relates to a cache directory addressing scheme which reduces delay in the critical address path for upgradeable caches in data processing systems.
2. Description of the Related Art
Contemporary data processing systems commonly employ upgradeable caches for staging data from system memory to the processor(s) with reduced access latency. For example, a data processing system may be marketed with a 256 KB cache which is upgradeable to 512 KB, or a 2 MB cache upgradeable to 4 MB. The upgradeable cache then provides different price-per-performance points for a user purchasing a data processing system. In order to have common directory support for multiple cache sizes, traditional systems generally increase sector size when upgrading. Such cache upgrades are thus typically supported in a data processing system by permitting selection of different cache directory addressing schemes depending on the size of the cache. The different cache directory addressing schemes may rely on different cache line lengths, utilizing different address bits to select a cache line, to serve as the intra-cache line address, and/or to serve as an address tag. A traditional cache directory addressing scheme of the type currently utilized to support an upgradeable cache in a data processing system is depicted in FIG. 3.
FIG. 3 depicts a cache directory addressing scheme for a 32 bit data processing system using a two-way set associative cache upgradeable from 1 MB to 2 MB. The 1 MB cache directory addressing configuration employs a 64 byte cache line. A cache line is the block of memory which a coherency state describes, also referred to as a cache block. When addressing a 1 MB cache, bits 26-31 (6 bits) of the address specify an intra-cache line address, bits 13-25 (13 bits) of the address are utilized as an index to a set of two cache lines in the cache directory and the cache memory, and bits 0-12 (13 bits) of the address are utilized as the cache line address tag to identify a particular cache line within the set of two. The index field specifies a row or congruence class within the cache directory and memory containing a set of two cache lines, the address tag field identifies a member of the specified congruence class (i.e., a particular cache line within the set of two cache lines), and the intra-cache line address field allows a particular byte to be selected from the identified congruence class member (cache line).
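The 1 MB field decomposition above can be sketched in software. This is a minimal illustrative sketch, not circuitry from the patent; note that the patent numbers bits from the most significant end (bit 0 is the MSB), so bits 26-31 are the low-order six bits of the address:

```python
def decompose_1mb(addr):
    """Split a 32-bit address per the 1 MB configuration of FIG. 3.

    The patent numbers bits from the MSB (bit 0), so bits 26-31 are the
    low-order 6 bits. 1 MB / 64 B lines / 2 ways = 8192 congruence
    classes, hence the 13-bit index.
    """
    offset = addr & 0x3F             # bits 26-31: intra-cache line address
    index = (addr >> 6) & 0x1FFF     # bits 13-25: congruence class index
    tag = (addr >> 19) & 0x1FFF      # bits 0-12: cache line address tag
    return tag, index, offset
```

An all-ones address yields all-ones fields, confirming that the three fields tile the full 32 bits (13 + 13 + 6 = 32).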
The 2 MB cache directory addressing configuration employs a 128 byte cache line with bits 25-31 (7 bits) of the address determining an intra-cache line address, bits 12-24 (13 bits) of the address being utilized as an index to the cache directory and the cache, and bits 0-11 (12 bits) of the address being utilized as the cache line address tag. In order to operate in the original system of 64 byte cache lines, the 128 byte cache line is sectored as two 64 byte cache lines. Thus, when upgrading the cache memory size, the index field is shifted down to increase the number of bits available for intra-cache line addressing within a larger cache line.
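The size-dependent field selection that the traditional scheme requires can be modeled as follows. This is an illustrative software sketch; the branch on cache size corresponds to the selection that a hardware multiplexer must perform in the critical address path:

```python
def decompose(addr, cache_mb):
    """Split a 32-bit address per the traditional selectable scheme.

    The 13-bit index field shifts down one position when the cache is
    upgraded (bits 13-25 for 1 MB, bits 12-24 for 2 MB), so the field
    boundaries depend on the installed cache size.
    """
    if cache_mb == 1:
        offset_bits, tag_bits = 6, 13    # 64 B line, tag bits 0-12
    elif cache_mb == 2:
        offset_bits, tag_bits = 7, 12    # 128 B line, tag bits 0-11
    else:
        raise ValueError("only the 1 MB and 2 MB configurations are modeled")
    offset = addr & ((1 << offset_bits) - 1)
    index = (addr >> offset_bits) & 0x1FFF               # always 13 index bits
    tag = (addr >> (offset_bits + 13)) & ((1 << tag_bits) - 1)
    return tag, index, offset
```

The same address produces a different index under each configuration, which is exactly why the hardware needs a multiplexer ahead of the directory and memory.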
One problem with the approach to implementing a selectable cache directory addressing scheme of the type described above derives from the necessity of selecting different address bits to serve as the index field, depending on the size of the cache memory currently in place. Typically a multiplexer 302 is employed to select which thirteen address bits, [13-25] or [12-24], are passed to the cache directory and memory to be utilized as the index for selection of a particular set of cache lines. However, multiplexer 302 introduces a delay in getting the index field from the address to cache directory 308 to begin looking up the address. Cache memory 306 access is also critical, with delay similarly being introduced by multiplexer 302 in the look up of an indexed cache line.
In general, three critical paths may be identified within the mechanism depicted in FIG. 3: from the address bus inputs Add[13-25] or Add[12-24] to cache data output 304 via cache memory 306; from the address bus inputs to cache data output 304 via cache directory 308; and from the address bus inputs to other logic (e.g., logic for victim selection or for driving a retry signal) at the outputs HIT_A and HIT_B of comparators 310. Each of these critical paths includes multiplexer 302 and its attendant delay and space requirement. Moreover, multiplexers 312 between cache directory 308 and comparators 310 are required to determine whether address line [12] is compared to address tag [12] or to itself. These multiplexers are required on both the processor-side address flow within a cache and the snoop-side address flow. Multiplexing of address bus lines to select the appropriate index field is also required for the address flow to address queues for loading addresses and for pipeline collision detection. Thus, employing an upgradeable cache memory in a data processing system incurs a performance penalty over cache memories which cannot be upgraded.
It would be desirable, therefore, to provide a cache directory addressing scheme for variable cache sizes which does not include any additional gate delays in the critical address path. It would further be advantageous if the cache directory addressing scheme utilized did not require different sized address tags to be compared depending on the size of the cache memory employed.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide an improved upgradeable cache for use in data processing systems.
It is another object of the present invention to provide an improved cache directory addressing scheme for upgradeable caches.
It is yet another object of the present invention to provide a cache directory addressing scheme which reduces delay in the critical address path for upgradeable caches in data processing systems.
It is still yet another object of the present invention to further improve system performance via more associativity when upgrading caches.
The foregoing objects are achieved as is now described. To avoid multiplexing within the critical address paths, the same address field is employed as an index to the cache directory and cache memory regardless of the cache memory size. An increase in cache memory size is supported by increasing associativity within the cache directory and memory, for example by increasing congruence classes from two members to four members. For the smaller cache size, an additional address “index” bit is employed to select one of multiple groups of address tags/data items within a cache directory or cache memory row by comparison to a bit forced to a logic 1.
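The lookup described in the summary can be sketched as follows. The row layout, the two-member group assignment, and the choice of the low-order tag bit as the extra “index” bit are illustrative assumptions for this sketch, not the patent's exact circuit:

```python
def directory_lookup(row, addr, large_cache):
    """Look up a 32-bit address in one directory row of four members.

    row: list of four (valid, stored_tag) pairs; ways 0-1 form group 0
    and ways 2-3 form group 1 (hypothetical layout). The index field
    (bits 13-25) is identical for both cache sizes, so no multiplexer
    sits in the index path.
    """
    tag = (addr >> 19) & 0x1FFF      # bits 0-12, same width for either size
    extra = tag & 1                  # assumed extra "index" bit for small cache
    for way, (valid, stored_tag) in enumerate(row):
        group = way // 2
        # Large cache: all four ways are candidates (4-way associative).
        # Small cache: only the group selected by the extra bit may hit,
        # emulating the hardware compare against a bit forced to logic 1.
        if valid and (large_cache or group == extra) and stored_tag == tag:
            return way               # hit in this way
    return None                      # miss
```

With the small cache installed, a tag stored in the wrong group cannot hit even if it matches, which is how the extra bit halves the effective associativity without moving the index field.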
The above as well as additional objects, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 4797814 (1989-01-01), Brenza
patent: 5367653 (1994-11-01), Coyle et al.
patent: 5392410 (1995-02-01), Liu
patent: 5418922 (1995-05-01), Liu
patent: 5522056 (1996-05-01), Watanabe et al.
patent: 5680577 (1997-10-01), Aden et al.
patent: 5835928 (1998-11-01), Auslander et al.
patent: 5897651 (1999-04-01), Cheong et al.
patent: 5924128 (1999-07-01), Luick et al.
patent: 5943686 (1999-08-01), Arimilli et al.
patent: 4-127339 (1992-04-01), None
patent: 5-12119 (1993-01-01), None
patent: 1-280850 (1998-11-01), None
patent: 11060255 (1999-03-01), None
patent: 11060298 (1999-03-01), None
