Storing a flushed cache line in a memory buffer of a controller

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S121000

Reexamination Certificate

active

06460114

ABSTRACT:

BACKGROUND
The invention relates generally to computer memory systems and more particularly, but not by way of limitation, to a caching technique to improve host processor memory access operations.
In a typical computer system, program instructions and data are read from and written to system memory at random addresses. To combat this random nature of memory access operations, level-1 (L1) and level-2 (L2) cache memories have been used to decrease the time, or number of clock cycles, a given processor must spend communicating with system memory during memory read and write operations.
Cache memories rely on the principle of access locality to improve the efficiency of processor-to-memory operations and, therefore, overall computer system performance. In particular, when a processor accesses system memory for program instructions and/or data, the information retrieved includes not only the targeted instructions and/or data, but additional bytes of information that surround the targeted memory location. The sum of the information retrieved and stored in the cache is known as a “cache line.” (A typical cache line may comprise 32 bytes.) The principle of access locality predicts that the processor will very probably use the additional retrieved bytes subsequent to the use of the originally targeted program instructions. During such operations as the execution of program loops, for example, information in a single cache line may be used multiple times. Each processor-initiated memory access that may be satisfied by information already in a cache (referred to as a “hit”) eliminates the need to access system memory and, therefore, improves the operational speed of the computer system. In contrast, if a processor-initiated memory access cannot be satisfied by information already in a cache (referred to as a “miss”), the processor must access system memory, causing a new cache line to be brought into the cache and, perhaps, the removal of an existing cache line.
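For illustration only, the following minimal C sketch shows how an address selects a cache line and how a hit or miss is determined, assuming a hypothetical direct-mapped organization with the 32-byte line size mentioned above and an arbitrary number of lines; the patent does not prescribe any particular cache geometry.

/* Minimal sketch of a direct-mapped cache lookup (illustrative assumption,
 * not the patented organization). */
#include <stdint.h>
#include <stdbool.h>

#define LINE_SIZE   32u        /* bytes per cache line, per the text above */
#define NUM_LINES   1024u      /* hypothetical number of lines in the cache */

struct cache_line {
    bool     valid;            /* slot currently holds a line */
    uint32_t tag;              /* upper address bits identifying the line */
    uint8_t  data[LINE_SIZE];  /* the bytes fetched around the target address */
};

static struct cache_line cache[NUM_LINES];

/* Returns true on a hit: the requested address falls within a cached line. */
bool cache_lookup(uint32_t addr)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;   /* which line slot   */
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);   /* which memory block */
    return cache[index].valid && cache[index].tag == tag;
}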
Referring to FIG. 1, many modern computer systems 100 utilize processor units 102 that incorporate a small L1 cache memory 104 (e.g., 32 kilobytes, KB) while also providing a larger external L2 cache memory 106 (e.g., 256 KB to 512 KB). As shown, processor unit 102, L1 cache 104 and L2 cache 106 are coupled to system memory 108 via processor bus 110 and system controller 112. As part of processor unit 102 itself, L1 cache 104 provides the fastest possible access to stored cache line information. Because of its relatively small size, however, cache miss operations may occur frequently. When an L1 cache miss occurs, L2 cache 106 is searched for the targeted program data and/or program instructions (hereinafter collectively referred to as data). If L2 cache 106 contains the targeted data, the appropriate cache line is transferred to L1 cache 104. If L2 cache 106 does not contain the targeted data, an access operation to system memory 108 (typically mediated by system controller 112) is initiated. The time between processor unit 102 initiating a search for target data and the time that data is acquired or received by the processor unit (from L1 cache 104, L2 cache 106 or memory 108) is known as read latency. A key function of caches 104 and 106 is to reduce processor unit 102's read latency.
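The read path just described can be summarized by the following C sketch. The lookup predicates and the cycle counts are hypothetical placeholders used only to show how each level of the hierarchy shortens average read latency; they are not taken from the patent.

/* Sketch of the hierarchical read path: L1, then L2, then system memory. */
#include <stdint.h>
#include <stdbool.h>

/* Placeholder predicates standing in for the tag lookups performed by
 * L1 cache 104 and L2 cache 106. */
static bool l1_lookup(uint32_t addr) { (void)addr; return false; }
static bool l2_lookup(uint32_t addr) { (void)addr; return false; }

/* Walk the hierarchy in the order described above; the returned cycle
 * counts are illustrative assumptions only. */
unsigned read_latency_cycles(uint32_t addr)
{
    if (l1_lookup(addr))
        return 2;      /* hit in on-chip L1: fastest possible access */
    if (l2_lookup(addr))
        return 10;     /* L1 miss, L2 hit: line is copied into L1 cache 104 */
    return 100;        /* miss in both: access system memory 108 via controller 112 */
}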
If L1 cache 104 is full when a new cache line is brought in for storage, a selected cache line is removed (often referred to as flushed). If the selected cache line has not been modified since being loaded into L1 cache 104 (i.e., the selected cache line is “clean”), it may be replaced immediately by the new cache line. If the selected cache line has been modified since being placed into L1 cache 104 (i.e., the selected cache line is “dirty”), it may be flushed to L2 cache 106. If L2 cache 106 is full when an L1 cache line is brought in for storage, one of its cache lines is selected for replacement. As with L1 cache 104, if the selected cache line is clean it may be replaced immediately. If the selected cache line is dirty, however, it may be flushed to posted write buffer 114 in system controller 112. The purpose of posted write buffer 114 is to provide short-term storage of dirty cache lines that are in the process of being written to system memory 108. (Posted write buffers 114 are typically only large enough to store a few, e.g., 8, cache lines.)
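A minimal C sketch of this replacement policy follows: a clean victim is simply overwritten, a dirty L1 victim is cast out to L2, and a dirty L2 victim is handed to the controller's posted write buffer 114 on its way to system memory 108. The structure layout and helper names are illustrative assumptions, not taken from the patent.

/* Sketch of clean-vs-dirty victim handling during cache line replacement. */
#include <stdint.h>
#include <stdbool.h>

struct line {
    bool     valid;
    bool     dirty;          /* modified since it was loaded into the cache */
    uint32_t tag;
    uint8_t  data[32];
};

/* Assumed helpers standing in for the destinations of a flushed line. */
void l2_store_line(const struct line *victim);             /* flush L1 -> L2 cache 106 */
void posted_write_buffer_store(const struct line *victim); /* flush L2 -> buffer 114   */

/* Replace a victim line in L1; a dirty victim is flushed to L2 first. */
void l1_replace(struct line *victim, const struct line *incoming)
{
    if (victim->valid && victim->dirty)
        l2_store_line(victim);   /* cast out the modified line */
    *victim = *incoming;         /* clean (or invalid) lines are simply overwritten */
}

/* Replace a victim line in L2; a dirty victim goes to the posted write buffer. */
void l2_replace(struct line *victim, const struct line *incoming)
{
    if (victim->valid && victim->dirty)
        posted_write_buffer_store(victim);
    *victim = *incoming;
}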
While reasonably large by historical standards, the sizes of both L1 cache 104 and L2 cache 106 are small relative to the amounts of data accessed by modern software applications. Because of this, computer systems employing conventional L1 and L2 caches (especially those designed for multitasking operations) may exhibit unacceptably high cache miss rates. One effect of high cache miss rates is to increase the latency of processor unit read operations. Thus, it would be beneficial to provide a mechanism to reduce the memory latency experienced by host processor units.
SUMMARY
In one embodiment the invention provides a computer system comprising a processor, a level-1 cache (operatively coupled to the processor), a level-2 cache (operatively coupled to the processor), a system memory, and a system controller (operatively coupled to the processor, level-1 cache, level-2 cache and system memory), wherein the system controller has a memory buffer adapted to store cache lines flushed (cast out) from one or more processor caches. The memory buffer, referred to herein as a cast-out cache, may be configured as a set associative or fully associative memory and may comprise dynamic or static random access memory integrated into the system controller.
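The following C sketch shows one way such a set associative cast-out cache inside the system controller might be organized. The geometry (ways, sets, line size) and the simple way-selection policy are hypothetical assumptions; the embodiment only requires that the controller's memory buffer hold cache lines cast out from the processor caches.

/* Sketch of a set-associative cast-out cache in the system controller. */
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define CO_WAYS   4u      /* hypothetical associativity */
#define CO_SETS   256u    /* hypothetical number of sets */
#define CO_LINE   32u     /* bytes per cache line */

struct co_entry {
    bool     valid;
    uint32_t tag;
    uint8_t  data[CO_LINE];
};

static struct co_entry cast_out_cache[CO_SETS][CO_WAYS];

/* Store a cast-out line, filling an invalid way or (naively) reusing way 0. */
void cast_out_store(uint32_t addr, const uint8_t *line)
{
    uint32_t set = (addr / CO_LINE) % CO_SETS;
    uint32_t tag = addr / (CO_LINE * CO_SETS);
    unsigned way = 0;
    for (unsigned w = 0; w < CO_WAYS; w++)
        if (!cast_out_cache[set][w].valid) { way = w; break; }
    cast_out_cache[set][way].valid = true;
    cast_out_cache[set][way].tag   = tag;
    memcpy(cast_out_cache[set][way].data, line, CO_LINE);
}

/* Look up a line previously cast out by a processor cache. */
bool cast_out_lookup(uint32_t addr, uint8_t *line_out)
{
    uint32_t set = (addr / CO_LINE) % CO_SETS;
    uint32_t tag = addr / (CO_LINE * CO_SETS);
    for (unsigned w = 0; w < CO_WAYS; w++) {
        struct co_entry *e = &cast_out_cache[set][w];
        if (e->valid && e->tag == tag) {
            memcpy(line_out, e->data, CO_LINE);
            return true;
        }
    }
    return false;
}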
In another embodiment, the invention provides a method to control memory access transactions. The method includes receiving a memory access request signal from a device, identifying the device, selecting a cache structure based on the identified device, and using the selected cache structure to satisfy the memory access request. The acts of selecting a cache structure and using the selected cache structure may comprise selecting a cache structure if the identified device is a processor unit, and otherwise accessing a system memory to satisfy the memory request. Methods in accordance with the invention may be stored in any media that is readable and executable by a computer system.
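One way this method could look in C is sketched below, reusing the cast_out_lookup routine from the sketch above. The device identifiers and the system_memory_read helper are illustrative assumptions, not elements of the claims.

/* Sketch of the claimed method: identify the requesting device and use a
 * cache structure only when the requester is a processor unit. */
#include <stdint.h>
#include <stdbool.h>

enum device_id { DEV_PROCESSOR, DEV_OTHER };   /* e.g., a bus-master I/O device */

bool cast_out_lookup(uint32_t addr, uint8_t *line_out);     /* see sketch above */
void system_memory_read(uint32_t addr, uint8_t *line_out);  /* assumed helper   */

void handle_memory_request(enum device_id dev, uint32_t addr, uint8_t *line_out)
{
    /* Processor requests may be satisfied from the selected cache structure. */
    if (dev == DEV_PROCESSOR && cast_out_lookup(addr, line_out))
        return;
    /* Other devices (or cache misses) access system memory directly. */
    system_memory_read(addr, line_out);
}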


REFERENCES:
patent: 5261066 (1993-11-01), Jouppi et al.
patent: 5524220 (1996-06-01), Verma et al.
patent: 5638534 (1997-06-01), Mote, Jr.
patent: 5778422 (1998-07-01), Genduso et al.
patent: 5893153 (1999-04-01), Tzeng et al.
patent: 5944815 (1999-08-01), Witt
patent: 6038645 (2000-03-01), Nanda et al.
patent: 6078992 (2000-06-01), Hum
patent: 6154816 (2000-11-01), Steely et al.
patent: 6195729 (2001-02-01), Arimilli et al.
patent: 6199142 (2001-03-01), Saulsbury et al.
patent: 6279080 (2001-08-01), DeRoo
patent: 0 470 736 (1992-02-01), None
patent: 0 657 819 (1995-06-01), None
patent: 0 681 241 (1995-11-01), None
patent: 0 800 137 (1997-10-01), None
patent: 2 215 887 (1989-09-01), None
Patterson, David A. and John L. Hennessy. Computer Architecture: A Quantitative Approach. Morgan Kaufman Publishers, Inc. 1996.
