Title: ECC mechanism for set associative cache array
Patent type: Reexamination Certificate
Status: active
Patent number: 06480975
Filing date: 1998-02-17
Issue date: 2002-11-12
Examiner: Sheikh, Ayaz (Department: 2155)
Classification: Error detection/correction and fault detection/recovery – Data processing system error or fault handling – Reliability and availability
US classes: 714/763; 714/767; 714/768; 714/769; 714/770; 714/804; 714/805
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to computer systems, and more particularly to a method of improving the performance of a cache used by a processor of a computer system, by reducing delays associated with parity checks and error correction codes.
2. Description of the Related Art
The basic structure of a conventional computer system 10 is shown in FIG. 1. Computer system 10 may have one or more processing units, two of which 12a and 12b are depicted. Processing units 12a and 12b are connected to various peripheral devices including input/output (I/O) devices 14 (such as a display monitor, keyboard, and permanent storage device), memory device 16 (such as random access memory or RAM) that is used by the processing units to carry out program instructions, and firmware 18 whose primary purpose is to seek out and load an operating system from one of the peripherals (usually the permanent memory device) whenever the computer is first turned on. Processing units 12a and 12b communicate with the peripheral devices by various means, including a generalized interconnect or bus 20. Computer system 10 may have many additional components which are not shown, such as serial and parallel ports for connection to, e.g., modems or printers. Those skilled in the art will further appreciate that there are other components that might be used in conjunction with those shown in the block diagram of FIG. 1; for example, a display adapter might be used to control a video display monitor, a memory controller can be used to access memory 16, etc. Also, instead of connecting I/O devices 14 directly to bus 20, they may be connected to a secondary (I/O) bus which is further connected to an I/O bridge to bus 20. The computer can have more than two processing units.
In a symmetric multi-processor (SMP) computer, all of the processing units are generally identical, that is, they all use a common set or subset of instructions and protocols to operate, and generally have the same architecture. A typical architecture is shown in FIG. 1. A processing unit includes a processor core 22 having a plurality of registers and execution units, which carry out program instructions in order to operate the computer. An exemplary processing unit includes the PowerPC™ processor marketed by International Business Machines Corp. The processing unit can also have one or more caches, such as an instruction cache 24 and a data cache 26, which are implemented using high speed memory devices. Caches are commonly used to temporarily store values that might be repeatedly accessed by a processor, in order to speed up processing by avoiding the longer step of loading the values from memory 16. These caches are referred to as “on-board” when they are integrally packaged with the processor core on a single integrated chip 28. Each cache is associated with a cache controller (not shown) that manages the transfer of data between the processor core and the cache memory.
A processing unit 12 can include additional caches, such as cache 30, which is referred to as a level 2 (L2) cache since it supports the on-board (level 1) caches 24 and 26. In other words, cache 30 acts as an intermediary between memory 16 and the on-board caches, and can store a much larger amount of information (instructions and data) than the on-board caches can, but at a longer access penalty. For example, cache 30 may be a chip having a storage capacity of 256 or 512 kilobytes, while the processor may be an IBM PowerPC™ 604-series processor having on-board caches with 64 kilobytes of total storage. Cache 30 is connected to bus 20, and all loading of information from memory 16 into processor core 22 usually comes through cache 30. Although FIG. 1 depicts only a two-level cache hierarchy, multi-level cache hierarchies can be provided where there are many levels of interconnected caches.
A cache has many “blocks” which individually store the various instructions and data values. The blocks in any cache are divided into groups of blocks called “sets” or “congruence classes.” A set is the collection of cache blocks that a given memory block can reside in. For any given memory block, there is a unique set in the cache that the block can be mapped into, according to preset mapping functions. The number of blocks in a set is referred to as the associativity of the cache, e.g., 2-way set associative means that for any given memory block there are two blocks in the cache that the memory block can be mapped into; however, several different blocks in main memory can be mapped to any given set. A 1-way set associative cache is direct mapped, that is, there is only one cache block that can contain a particular memory block. A cache is said to be fully associative if a memory block can occupy any cache block, i.e., there is one congruence class, and the address tag is the full address of the memory block.
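To make the mapping concrete, the following sketch in C shows how an address is commonly split into a block offset, a congruence class (set) index, and an address tag; the block size and number of sets are illustrative assumptions, not values taken from the description above.

    #include <stdint.h>

    #define BLOCK_BYTES 64u    /* bytes per cache block (assumed)           */
    #define NUM_SETS    128u   /* congruence classes in the cache (assumed) */

    /* Offset of a byte within its cache block: the low-order address bits. */
    static inline uint32_t block_offset(uint32_t addr) {
        return addr % BLOCK_BYTES;
    }

    /* Congruence class (set) that the memory block maps into. */
    static inline uint32_t set_index(uint32_t addr) {
        return (addr / BLOCK_BYTES) % NUM_SETS;
    }

    /* Address tag: the remaining high-order bits, kept in the directory. */
    static inline uint32_t addr_tag(uint32_t addr) {
        return addr / (BLOCK_BYTES * NUM_SETS);
    }

In a 2-way set associative organization, a memory block whose address yields a given set_index may reside in either of that class's two members, so many memory blocks with distinct tags compete for the same small group of cache blocks.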
An exemplary cache line (block) includes an address tag field, a state bit field, an inclusivity bit field, and a value field for storing the actual instruction or data. The state bit and inclusivity bit fields are used to maintain cache coherency in a multiprocessor computer system (indicating the validity of the value stored in the cache). The address tag is a subset of the full address of the corresponding memory block. A compare match of an incoming address with one of the tags within the address tag field indicates a cache “hit.” The collection of all of the address tags in a cache is referred to as a directory (and sometimes includes the state bit and inclusivity bit fields), and the collection of all of the value fields is the cache entry array.
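A minimal sketch of such a directory entry and the tag-compare “hit” test might look as follows; the field names and widths are hypothetical rather than the layout described above.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical directory entry: the tag is only a subset of the full
       address; the state and inclusivity bits track coherency/validity.  */
    typedef struct {
        uint32_t tag;
        uint8_t  state;        /* nonzero here simply means "valid" */
        uint8_t  inclusivity;
    } dir_entry_t;

    /* A cache "hit" is a compare match of the incoming address tag against
       one of the valid tags in the addressed congruence class.            */
    static bool is_hit(const dir_entry_t *set, unsigned ways, uint32_t tag) {
        for (unsigned w = 0; w < ways; w++) {
            if (set[w].state != 0 && set[w].tag == tag) {
                return true;
            }
        }
        return false;
    }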
When all of the blocks in a congruence class for a given cache are full and that cache receives a request, whether a “read” or “write,” to a memory location that maps into the full congruence class, the cache must “evict” one of the blocks currently in the class. The cache chooses a block by one of a number of means known to those skilled in the art (least recently used (LRU), random, pseudo-LRU, etc.) to be evicted. If the data in the chosen block is modified, that data is written to the next lowest level in the memory hierarchy, which may be another cache (in the case of the L1 or on-board cache) or main memory (in the case of an L2 cache, as depicted in the two-level architecture of FIG. 1). By the principle of inclusion, the lower level of the hierarchy will already have a block available to hold the written modified data. However, if the data in the chosen block is not modified, the block is simply abandoned and not written to the next lowest level in the hierarchy. This process of removing a block from one level of the hierarchy is known as an “eviction.” At the end of this process, the cache no longer holds a copy of the evicted block.
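The eviction flow just described can be sketched as follows, assuming LRU victim selection; the structures and the writeback_to_next_level helper are illustrative assumptions, not the cache's actual interfaces.

    #include <stdint.h>

    /* Assumed block bookkeeping; "modified" marks dirty data that must be
       written to the next lower level before the block can be discarded. */
    typedef struct {
        uint32_t tag;
        int      valid;
        int      modified;
        unsigned age;          /* larger value = less recently used */
    } cache_block_t;

    /* Hypothetical stand-in for casting a dirty block out to the next lower
       level (another cache for an L1, or main memory for an L2).          */
    static void writeback_to_next_level(const cache_block_t *blk) {
        (void)blk;             /* a real cache would transfer blk's data here */
    }

    static void evict_one(cache_block_t *set, unsigned ways) {
        unsigned victim = 0;   /* LRU choice; random or pseudo-LRU also work */
        for (unsigned w = 1; w < ways; w++) {
            if (set[w].age > set[victim].age) {
                victim = w;
            }
        }
        if (set[victim].valid && set[victim].modified) {
            writeback_to_next_level(&set[victim]);
        }
        set[victim].valid = 0; /* the cache no longer holds the evicted block */
    }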
FIG. 2 illustrates the foregoing cache structure and eviction process. A cache 40 (L1 or a lower level) includes a cache directory 42, a cache entry array 44, an LRU array 46, and control logic 48 for selecting a block for eviction from a particular congruence class. The depicted cache 40 is 8-way set associative, and so each of the directory 42, cache entry array 44 and LRU array 46 has a specific set of eight blocks for a particular congruence class as indicated at 50. In other words, a specific member of the congruence class in cache directory 42 is associated with a specific member of the congruence class in cache entry array 44 and with a specific member of the congruence class in LRU array 46, as indicated by the “X” shown in congruence class 50.
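One way to picture the parallel structures of FIG. 2 is as arrays indexed by congruence class and by member (way), as in the following sketch; only the 8-way associativity comes from the description above, while the set count and line size are assumed.

    #include <stdint.h>

    #define WAYS       8u     /* 8-way set associative, as in FIG. 2       */
    #define SETS       128u   /* number of congruence classes (assumed)    */
    #define LINE_BYTES 64u    /* bytes per cache block (assumed)           */

    typedef struct {
        uint32_t tag;
        uint8_t  state;
    } dir_entry_t;

    /* Member w of congruence class s in the directory corresponds to member
       w of class s in the entry array and in the LRU array (the "X" of
       FIG. 2); the control logic that picks a victim is not shown here.    */
    typedef struct {
        dir_entry_t directory[SETS][WAYS];             /* address tags + state */
        uint8_t     entries[SETS][WAYS][LINE_BYTES];   /* the cached values    */
        uint8_t     lru[SETS][WAYS];                   /* replacement info     */
    } cache_t;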
A bit in a given cache block may contain an incorrect value, either due to a soft error (a random, transient condition caused by, e.g., stray radiation or electrostatic discharge) or to a hard error (a permanent condition, e.g., defective cell). One common cause of errors is a soft error resulting from alpha radiation emitted by the lead in the solder (C4) bumps used to f
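As noted in the Field of the Invention, parity checks and error correction codes are the usual safeguards against such bit errors. The sketch below illustrates simple even parity, which detects a single-bit error in a cache line but cannot correct it; correction requires a true ECC such as a Hamming-style code, and none of this code is taken from the patent itself.

    #include <stddef.h>
    #include <stdint.h>

    /* Even parity over a cache line: a single stored parity bit detects any
       single-bit flip but cannot say which bit flipped or correct it.      */
    static uint8_t even_parity(const uint8_t *data, size_t len) {
        uint8_t p = 0;
        for (size_t i = 0; i < len; i++) {
            uint8_t b = data[i];
            b ^= b >> 4;   /* fold the eight bits of the byte down to one */
            b ^= b >> 2;
            b ^= b >> 1;
            p ^= b & 1u;
        }
        return p;          /* stored alongside the line when it is written */
    }

    /* On a read, recompute the parity and compare it with the stored bit;
       a mismatch flags a (soft or hard) bit error in the line.            */
    static int parity_error(const uint8_t *data, size_t len, uint8_t stored) {
        return even_parity(data, len) != stored;
    }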
Inventors: Ravi Kumar Arimilli; John Steven Dodson; Jerry Don Lewis
Agents/Attorneys: Bracewell & Patterson L.L.P.; Emile Volel
Assignee: International Business Machines Corporation
Examiners: Frantz B. Jean; Ayaz Sheikh