Way-predicting cache memory
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Filed: 1999-02-24
Issued: 2002-07-23
Examiner: Yoo, Do Hyun (Department: 2187)
U.S. Classes: C711S128000, C711S137000, C711S140000, C712S222000, C712S238000, C714S006130, C714S805000, C714S763000
Status: active
Patent number: 06425055
FIELD OF THE INVENTION
The present invention relates to the field of data processing and, more particularly, to a method and apparatus for caching data in a data processing system.
BACKGROUND OF THE INVENTION
Cache memories are relatively small, high-speed memories used to reduce memory access time in modern computer systems. The idea is to store data from frequently accessed regions of system memory in cache memory so that subsequent accesses to the cached regions will not incur the full system memory access time, but the shorter cache access time instead. A memory transaction that accesses cache memory instead of system memory is called a cache hit, and the cache “hit rate” is a fundamental metric of cache design.
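The effect of the hit rate on average access time can be sketched with a short calculation. The timing figures below are invented for illustration and are not taken from the patent.

```python
# Hypothetical illustration of why hit rate is a fundamental cache metric:
# hits pay the (short) cache access time, misses pay the (long) system
# memory access time, so the average access time is a weighted mix.
def avg_access_time(hit_rate, cache_ns, memory_ns):
    """Average memory access time for a given cache hit rate."""
    return hit_rate * cache_ns + (1.0 - hit_rate) * memory_ns

# A 95% hit rate with a 2 ns cache and 60 ns system memory:
print(avg_access_time(0.95, 2.0, 60.0))  # ~4.9 ns, far below 60 ns
```

Even a modest miss rate dominates the average, which is why designs trade extra logic for a higher hit rate.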
FIG. 1 illustrates a prior art cache memory 12 that includes a data store 14 and a tag store 16. In effect, the cache memory 12 is a data buffer in which each entry in the data store 14 is mapped to a region of system memory by a corresponding entry in the tag store 16. When an address is asserted to system memory, set and tag fields within the address are used to determine whether an entry in the cache memory 12 is mapped to the region of system memory sought to be accessed. The set field (sometimes called an index) is decoded to select an entry in the data store 14 and a corresponding entry in the tag store 16. An address value, called a “tag,” is output from the selected tag store entry and compared with the tag field of the asserted address. If the tag field of the asserted address matches the tag output from the selected tag store entry, a cache hit is signaled to indicate that the selected entry in the data store is mapped to the region of system memory sought to be accessed. In the case of a memory read operation, a cache line (i.e., the unit of information in a cache) is output from the selected entry in the data store and returned to the requestor. Low-order bits of the input address may be used to select a sub-portion of the cache line according to the width of the transfer path to the requestor and the width of data that the requestor can handle. Write requests are handled similarly, except that data is written to the selected entry in the data store 14.
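The direct-mapped lookup described above can be sketched in a few lines. The field widths (16-byte lines, 64 sets) and the dictionary-free list model are assumptions for illustration, not details from the patent.

```python
# Sketch of a direct-mapped cache: the set field indexes one entry in the
# data store and one in the tag store; a tag match signals a cache hit.
OFFSET_BITS = 4   # assumed 16-byte cache line
SET_BITS = 6      # assumed 64 sets

def split_address(addr):
    """Decompose an address into (tag, set, offset) fields."""
    offset = addr & ((1 << OFFSET_BITS) - 1)
    set_field = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (OFFSET_BITS + SET_BITS)
    return tag, set_field, offset

class DirectMappedCache:
    def __init__(self):
        self.tag_store = [None] * (1 << SET_BITS)
        self.data_store = [None] * (1 << SET_BITS)

    def read(self, addr, fetch_line):
        """Return (cache_line, hit); on a miss, fill from fetch_line()."""
        tag, set_field, _ = split_address(addr)
        if self.tag_store[set_field] == tag:
            return self.data_store[set_field], True   # cache hit
        line = fetch_line(addr)                       # miss: go to system memory
        self.tag_store[set_field] = tag               # new tag evicts the old one
        self.data_store[set_field] = line
        return line, False
```

Note that two addresses differing only in the tag bits (e.g., 0x1234 and 0x11234 under these widths) select the same entry and evict each other on alternating accesses, which is the thrashing pattern discussed below.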
The cache memory 12 is referred to as a direct mapped cache because only one cache line is stored in the cache for each possible value of the set field. That is, system memory is directly mapped to the cache based on the set field, so that there is only one tag field in the tag store 16 per value of the set field. One undesirable consequence of direct mapping is that a cache miss will occur in response to each new memory address whose set field, but not tag field, matches a previously asserted address. Thus, if a sequence of memory accesses is directed to system memory addresses that have the same set fields but different tag fields, a significant number of cache misses will occur and data from the different system memory addresses will be frequently swapped into and out of the cache memory 12, a phenomenon called “thrashing.” An alternate mapping scheme, called multiple-way, set associative mapping, is used to avoid this sort of thrashing.
FIG. 2 illustrates a prior-art four-way, set associative cache memory 26 in which each set field is mapped to as many as four system memory addresses. Instead of a single data store, there are four data stores (28A-28D), called “data ways,” and instead of a single tag store, there are four tag stores (30A-30D), called “tag ways.” In effect, the direct mapped operation described above occurs in parallel for each of the four data ways and four tag ways. When a memory address is received, the set field is used to select a respective cache line from each of the four data ways and a respective tag from each of the four tag ways. Each of the selected tags is compared against the tag field of the input cache address to generate a corresponding tag way hit signal. The tag way hit signals are input to hit logic 31, which asserts or deasserts a cache hit signal based on whether any of the tag way hit signals indicates a match. Assuming a cache hit, the hit logic generates a data way select signal that indicates which of the tag ways contains the tag matching the tag field of the input address. The data way select signal is supplied to a multiplexer 32 to select, as the source of the cache line output, the data way that corresponds to the tag way containing the matching tag.
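The parallel four-way lookup can be sketched as follows; the field widths are assumed for illustration, and the `any`/`index` pair stands in for the hit logic and way select multiplexer described above.

```python
# Sketch of a four-way, set associative lookup: the set field selects one
# tag and one line per way, the four tag comparisons feed the hit logic,
# and the matching way drives the cache line output (the multiplexer role).
NUM_WAYS = 4
SET_BITS = 6      # assumed 64 sets
OFFSET_BITS = 4   # assumed 16-byte cache line

class FourWayCache:
    def __init__(self):
        sets = 1 << SET_BITS
        self.tag_ways = [[None] * sets for _ in range(NUM_WAYS)]
        self.data_ways = [[None] * sets for _ in range(NUM_WAYS)]

    def lookup(self, addr):
        """Return the cache line on a hit, or None on a miss."""
        set_field = (addr >> OFFSET_BITS) & ((1 << SET_BITS) - 1)
        tag = addr >> (OFFSET_BITS + SET_BITS)
        # Compare the asserted tag against all four tag ways "in parallel".
        way_hits = [self.tag_ways[w][set_field] == tag for w in range(NUM_WAYS)]
        if any(way_hits):                  # hit logic asserts the cache hit signal
            way = way_hits.index(True)     # data way select signal
            return self.data_ways[way][set_field]
        return None                        # cache miss
```

Because a set field now maps to four candidate lines, the way select step sits on the output path, which is the latency cost the next paragraph describes.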
Because the same set field is associated with multiple tag addresses in a multiple-way, set associative cache memory, the type of thrashing that can occur in direct mapped caches is usually avoided. Consequently, a multiple-way, set associative cache tends to achieve a higher hit rate than a direct mapped cache having the same sized data store. The higher hit rate is not without cost, however: the additional logic required to generate the way select signal and to select one of the plurality of set-field-selected cache lines increases the overall time required to output a cache line. This is in contrast to a direct mapped cache, which outputs a cache line as quickly as the set field can be decoded and the selected cache line can be driven onto the return data path.
SUMMARY OF THE INVENTION
An apparatus and method for accessing a cache memory are disclosed. A memory address is asserted that includes a set field and a tag field that together uniquely identify a region of system memory equal in size to a cache line in a cache memory. A partial tag field that includes less than all bits in the tag field is compared against a partial tag entry stored in the cache memory. A cache line is output from the cache memory if the partial tag field matches the partial tag entry.
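The claimed partial-tag comparison can be sketched as follows. The choice of which and how many tag bits form the partial tag (here, the four low-order bits) is an assumption for illustration; the patent only specifies that the partial tag includes less than all bits of the tag field.

```python
# Sketch of partial-tag matching: a cache line is output as soon as a
# partial tag (a subset of the tag bits) matches the stored partial entry.
# Because only some bits are compared, distinct tags can alias, so a
# partial match is a prediction rather than a guaranteed hit.
PARTIAL_BITS = 4  # assumed width of the partial tag

def partial_tag(tag):
    """Extract the partial tag (here, the low-order bits) from a full tag."""
    return tag & ((1 << PARTIAL_BITS) - 1)

def predicted_hit(addr_tag, stored_partial):
    """True if the address's partial tag matches the stored partial entry."""
    return partial_tag(addr_tag) == stored_partial
```

The comparison is narrower and therefore faster than a full tag compare, but tags that agree in the low-order bits (e.g., 0xAB and 0x1B with a 4-bit partial tag) produce the same partial match, which is why the scheme predicts a way rather than proving a hit.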
REFERENCES:
patent: 4888773 (1989-12-01), Arlington et al.
patent: 4899342 (1990-02-01), Potter et al.
patent: 4920539 (1990-04-01), Albonesi
patent: 5127014 (1992-06-01), Raynham
patent: 5233616 (1993-08-01), Callander
patent: 5263032 (1993-11-01), Porter et al.
patent: 5267242 (1993-11-01), Lavallee et al.
patent: 5274645 (1993-12-01), Idleman et al.
patent: 5274646 (1993-12-01), Brey et al.
patent: 5325375 (1994-06-01), Westberg
patent: 5367526 (1994-11-01), Kong
patent: 5388108 (1995-02-01), DeMoss et al.
patent: 5392302 (1995-02-01), Kemp et al.
patent: 5428630 (1995-06-01), Weng et al.
patent: 5430742 (1995-07-01), Jeddeloh et al.
patent: 5510934 (1996-04-01), Brennan et al.
patent: 5526504 (1996-06-01), Hsu et al.
patent: 5537538 (1996-07-01), Bratt et al.
patent: 5717892 (1998-02-01), Oldfield
patent: 5809524 (1998-09-01), Singh et al.
patent: 5835928 (1998-11-01), Auslander et al.
patent: 5845320 (1998-12-01), Pawlowski
patent: 5845323 (1998-12-01), Roberts et al.
patent: 5848428 (1998-12-01), Collins
patent: 5931943 (1999-08-01), Orup
patent: 5956746 (1999-09-01), Wang
patent: 6016533 (2000-01-01), Tran
patent: 6016545 (2000-01-01), Mahalingaiah et al.
Steven A. Przybylski, New DRAM Technologies, © 1994, 1996. MicroDesign Resources, Sebastopol, CA, pp. 1-306.
R8000 Microprocessor Chip Set: Product Overview, Webpage [online]. Silicon Graphics. [Retrieved on Oct. 24, 1998]. Retrieved from the Internet: wysiwg://16/http://www.sgi.com/processors/r8000/product/book.html #55554, pp. 1-26.
Peter Yan-Tek Hsu, Design of the R8000 Microprocessor. Mountain View, CA: Silicon Graphics, MIPS Group, Jun. 2, 1994, pp. 1-16.
Inventors: Hinton, Glenn J.; Sager, David J.
Attorney, Agent or Firm: Blakely, Sokoloff, Taylor & Zafman LLP
Assignee: Intel Corporation
Examiners: Namazi, Mehdi; Yoo, Do Hyun