Methods and apparatus for improving system performance with a shared cache memory

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories


Details

U.S. Classification: C711S122000, C711S147000, C711S154000
Type: Reexamination Certificate (active)
Patent number: 06434672


FIELD
The present invention relates to computer systems, and more particularly, but not by way of limitation, to methods and apparatus for improving computer system performance with a shared cache memory.
BACKGROUND
A cache memory is a high-speed memory unit interposed in the memory hierarchy of a computer system between the relatively slower main memory and the relatively faster processors to improve effective memory transfer rates, thereby improving system performance. The name refers to the fact that the small cache memory unit is essentially hidden and appears transparent to the user, who is aware only of the larger main memory. The cache memory is usually implemented by semiconductor memory devices having speeds that are comparable to the speed of the processor, while the main memory utilizes a less costly, lower-speed technology. The cache memory concept anticipates the likely reuse by a processor of selected data in main memory by storing a copy of the selected data in the cache memory, where a processor request for it can be satisfied significantly more quickly.
A cache memory typically includes a plurality of memory sections, wherein each memory section stores a block or a “line” of two or more words of data. For systems based on the particularly popular model 80486 microprocessor, a line consists of four “doublewords” (wherein each doubleword comprises four 8-bit bytes). Each line has associated with it an address tag that uniquely identifies which line of main memory it is a copy of.
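As a purely illustrative aside, such a line might be modeled in C roughly as follows; the struct layout and names are assumptions made for the sketch, not part of the disclosed apparatus, and the 16-byte size follows the 80486 example of four 4-byte doublewords:

    #include <stdint.h>

    #define LINE_BYTES 16   /* four doublewords x four 8-bit bytes, per the 80486 example */

    /* One cache line: a copy of LINE_BYTES of main-memory data plus the
     * address tag that uniquely identifies which main-memory line it is
     * a copy of. */
    struct cache_line {
        uint32_t tag;                 /* identifies the source line in main memory */
        int      valid;               /* nonzero when the entry holds a real copy  */
        uint8_t  data[LINE_BYTES];    /* the cached words themselves               */
    };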
In many computer systems, there may be several levels of cache memory. For example, each processor of a computer system may have one or more internal cache memories dedicated to that processor (these cache memories may be referred to as local cache memories). These dedicated cache memories may operate in a hierarchical fashion: first, the lowest level of cache memory is interrogated to determine whether it has the requested line of main memory and, if it is not there, the second lowest level of cache memory is then interrogated, and so forth. One or more processors, in turn, may share a level of cache memory, and it is conceivable that one or more shared cache memories may themselves share another level of cache memory. At the highest level of memory is the main memory, which is inclusive of all of the layers of cache memory. (Note that main memory may also be referred to as system memory.)
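That interrogation order amounts to a walk up the hierarchy, sketched below in C; the cache_level type and cache_contains helper are hypothetical stand-ins for whatever per-level lookup a real design provides:

    #include <stdint.h>

    struct cache_level;   /* opaque per-level cache state (illustrative) */
    int cache_contains(struct cache_level *level, uint32_t line_addr);   /* tag compare */

    /* Interrogate the hierarchy from the lowest (fastest) level upward,
     * stopping at the first level that holds the requested line.
     * Returns the level index, or -1 when every level misses and the
     * line must come from main memory. */
    int find_line(struct cache_level **levels, int num_levels, uint32_t line_addr)
    {
        for (int lvl = 0; lvl < num_levels; lvl++)
            if (cache_contains(levels[lvl], line_addr))
                return lvl;
        return -1;   /* miss at every level: go to main memory */
    }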
By way of illustration, consider the operation of a simple system having one processor, one level of cache memory, and main memory. When a read request originates in the processor for a line of data, an address tag comparison is made to determine whether a copy of the requested word resides in a line of the cache memory. If present, the data is used directly from the cache memory. This event is referred to as a cache read “hit.” If the data is not present, a line containing the requested word is retrieved from main memory and stored in the cache memory. The requested word is simultaneously provided to the processor. This event is referred to as a cache read “miss.”
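A minimal C sketch of this read sequence follows; the helper functions are hypothetical stand-ins for the tag comparison and the line fetch:

    #include <stdint.h>

    struct cache;        /* opaque cache state (illustrative) */
    struct cache_line;   /* the line structure sketched earlier */
    struct cache_line *lookup(struct cache *c, uint32_t addr);                /* tag comparison */
    struct cache_line *fill_from_main_memory(struct cache *c, uint32_t addr); /* line fetch */
    uint32_t word_from_line(struct cache_line *line, uint32_t addr);

    /* Read one word through a single-level cache, per the sequence above. */
    uint32_t cache_read(struct cache *c, uint32_t addr)
    {
        struct cache_line *line = lookup(c, addr);
        if (line != NULL)
            return word_from_line(line, addr);    /* read hit: use cached copy */
        line = fill_from_main_memory(c, addr);    /* read miss: fetch the line */
        return word_from_line(line, addr);        /* word goes to the processor */
    }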
In addition to using a cache memory to retrieve data, the processor may also write data directly to the cache memory instead of to the main memory. When the processor desires to write data to memory, an address tag comparison is made to determine whether the line into which data is to be written resides in the cache memory. If the line is present in the cache memory, the data is written directly into the line. This event is referred to as a cache write “hit.” In many systems a data “dirty bit” for the line is then set. The dirty bit indicates that data stored within the cache memory line is dirty or modified and is, therefore, the most up-to-date copy of the data. Thus, before the line is deleted from the cache memory or overwritten, the modified data must be written into main memory. This latter principle may be referred to as preserving cache coherency.
If the line into which data is to be written does not exist in the cache memory, the line is either fetched into the cache memory from main memory to allow the data to be written into the cache memory, or the data is written directly into the main memory. This event is referred to as a cache write “miss.”
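Both write cases can be sketched together in the same illustrative style; the helpers are again hypothetical, and the miss branch below happens to take the write-allocate option (fetching the line first) rather than writing directly to main memory:

    #include <stdint.h>

    struct cache;
    struct cache_line;
    struct cache_line *lookup(struct cache *c, uint32_t addr);
    struct cache_line *fill_from_main_memory(struct cache *c, uint32_t addr);
    void word_to_line(struct cache_line *line, uint32_t addr, uint32_t value);
    void set_dirty(struct cache_line *line);   /* sets the line's dirty bit */

    /* Write one word.  A hit writes straight into the cached line; this
     * sketch handles a miss by fetching the line first (write-allocate).
     * Either way the dirty bit records that the cache now holds the most
     * up-to-date copy, which must reach main memory before eviction. */
    void cache_write(struct cache *c, uint32_t addr, uint32_t value)
    {
        struct cache_line *line = lookup(c, addr);   /* address tag comparison */
        if (line == NULL)                            /* write miss */
            line = fill_from_main_memory(c, addr);   /* write-allocate policy */
        word_to_line(line, addr, value);             /* write hit path */
        set_dirty(line);
    }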
In some cases, a cache memory may need to “castout” a line of data because of the limited amount of storage space inherent in cache memories. This castout data may be dirty or modified, in which case it should not be discarded by the computer system. Thus, castout data is normally provided to the next higher level of cache memory (which may actually be the main memory), usually during a special set of bus cycles. This too preserves cache coherency.
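In the same illustrative style, a castout might look as follows (line_is_dirty, write_line_to_next_level, and invalidate_line are assumed helpers):

    struct cache;
    struct cache_line;
    int  line_is_dirty(struct cache_line *line);      /* reads the dirty bit */
    void write_line_to_next_level(struct cache *c, struct cache_line *line);
    void invalidate_line(struct cache_line *line);    /* frees the slot */

    /* Cast out a victim line to make room for new data.  A clean victim
     * may simply be discarded, but a dirty one must first be handed to
     * the next higher level (possibly main memory itself), typically in
     * a special set of bus cycles, so the modified data is not lost. */
    void castout(struct cache *c, struct cache_line *victim)
    {
        if (line_is_dirty(victim))
            write_line_to_next_level(c, victim);   /* preserve coherency */
        invalidate_line(victim);                   /* slot now free */
    }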
Cache memories may operate under a variety of protocols, including the popular MESI (Modified, Exclusive, Shared, Invalid) protocol, under which data in a particular cache may be marked as dirty or modified, as exclusive to that cache memory and main memory, as shared between two or more cache memories, or as an invalid line in the cache memory (which will result in a cache miss). More information regarding caching principles and techniques, including the MESI protocol, may be found in the various versions and volumes of the Intel P6 Family of Processors, Hardware Developer's Manual, all of which are hereby incorporated by reference.
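A minimal sketch of the four MESI states with two representative transitions, a local write and a remotely snooped read; this single-line model is an illustration only, not the protocol's full state machine:

    /* The four MESI states described above. */
    enum mesi_state { MODIFIED, EXCLUSIVE, SHARED, INVALID };

    /* Local processor writes the line: it ends up Modified in every case,
     * though the bus work needed first differs (a Shared line must
     * invalidate other copies; an Invalid line must be fetched). */
    enum mesi_state on_local_write(enum mesi_state s)
    {
        (void)s;   /* prior state affects bus traffic, not the final state */
        return MODIFIED;
    }

    /* Another agent reads the line: a Modified copy is written back first,
     * and any valid copy drops to Shared; an Invalid line stays Invalid. */
    enum mesi_state on_remote_read(enum mesi_state s)
    {
        return (s == INVALID) ? INVALID : SHARED;
    }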
Turning now to FIG. 1, there is shown a computer system 10 operating according to these conventional caching principles and techniques. In computer system 10, processors 20A-D each have a dedicated cache memory 30A-D, respectively. Additionally, processors 20A-B are operably connected to and share a shared cache memory 50A through bus 40A, while processors 20C-D are operably connected to and share a shared cache memory 50B through bus 40B. Processors 20A-B are symmetric agents on bus 40A, and shared cache memory 50A is a priority agent on bus 40A. Processors 20C-D and shared cache memory 50B operate in a similar fashion on bus 40B. The shared cache memories 50A and 50B, in turn, act as symmetric agents on bus 60, and a memory subsystem 70 (comprising a memory controller 80 and main memory 90) acts as a priority agent.
In operation, processor 20A may, for example, issue a read or write request for a line of data located in main memory. Processor 20A will first determine whether its dedicated cache memory 30A contains the requested line. If so, the line is provided to the processor 20A from its dedicated cache memory 30A. If, however, the line of data requested is not present in dedicated cache memory 30A, a “snoop” phase is initiated on bus 40A to determine if the requested line is located in dedicated cache memory 30B (belonging to processor 20B) or in shared cache memory 50A. During a snoop phase, other cache memories on bus 40A may issue signals indicating whether they have a copy of the requested line (e.g., by raising a HIT# signal) and what the condition of the line is (e.g., whether the line is dirty or modified, exclusive to that cache memory and main memory, or shared by that cache memory and one or more other cache memories). If the line is located in a cache memory on bus 40A, the line will be provided to dedicated cache memory 30A, where it may be cached. However, if the requested line is not located in any of the cache memories on bus 40A (including the shared cache memory 50A), the shared cache memory 50A must then initiate the read or write transaction on bus 60 (in effect “re-initiating” the original transaction) to access the requested line from main memory 90. (In some cases, of course, the shared cache memory 50A will need to initiate a snoop phase on bus 60 to determine whether the requested line is in shared cache memory 50B or some other cache memory on bus 60.) Main memory 90 will then respond to the line request and place the requested line of data on bus 60. After several bus cycles, the requested line of data eventually makes its way to the dedicated cache memory 30A of the requesting processor 20A.
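The snoop phase just described might be sketched as a probe of each caching agent on the bus; the agent list, snoop_probe helper, and result structure are illustrative assumptions, reusing the MESI states from the earlier sketch:

    #include <stdint.h>

    enum mesi_state { MODIFIED, EXCLUSIVE, SHARED, INVALID };   /* as sketched above */

    struct cache;
    struct snoop_result { int hit; enum mesi_state state; };    /* HIT# + line condition */
    struct snoop_result snoop_probe(struct cache *agent, uint32_t line_addr);

    /* Snoop phase on one bus: ask every other caching agent whether it
     * holds the requested line and in what condition.  Returns 1 when
     * some agent raises HIT#; otherwise the request must be re-initiated
     * on the next bus up (bus 60 in FIG. 1). */
    int snoop_bus(struct cache **agents, int num_agents, uint32_t line_addr,
                  struct snoop_result *out)
    {
        for (int i = 0; i < num_agents; i++) {
            *out = snoop_probe(agents[i], line_addr);
            if (out->hit)
                return 1;    /* line supplied from a cache on this bus */
        }
        return 0;            /* miss: e.g., shared cache 50A re-initiates on bus 60 */
    }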
