Dual cache with multiple interconnection operation modes

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

C711S118000, C711S122000

active

06397297

ABSTRACT:

FIELD OF THE INVENTION
The present invention generally relates to computer systems. More particularly, the present invention relates to a method and apparatus for improving performance in computer systems by arranging cache modules in several interconnected operational modes.
BACKGROUND OF THE INVENTION
A cache, or cache module (the terms are used interchangeably throughout this specification), is intended to enhance the speed at which information and data are retrieved. A main memory typically stores a large amount of data that is time-consuming to retrieve. The cache module contains a copy of portions of the main memory. When a processor attempts to read a word of memory, a check is made to determine whether the word is in the cache module. If so, the word is delivered to the processor. If not, a block of main memory, consisting of some fixed number of words, is read into the cache module and the word is then delivered to the processor.
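A minimal sketch of this hit/miss flow follows, assuming a direct-mapped cache and illustrative names (read_word, cache_line_t, BLOCK_WORDS, CACHE_LINES, a toy main_memory array) that do not come from the patent:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_WORDS 8                      /* K: words per block (illustrative) */
    #define CACHE_LINES 64                     /* C: lines in the cache module      */

    typedef struct {
        bool     valid;
        uint32_t tag;                          /* which memory block this line holds */
        uint32_t words[BLOCK_WORDS];           /* copy of one main-memory block      */
    } cache_line_t;

    static cache_line_t cache[CACHE_LINES];
    static uint32_t     main_memory[1u << 16]; /* toy backing store, size illustrative */

    /* Read one word: deliver it from the cache on a hit, otherwise read the whole
     * containing block from main memory into the cache first (the miss path). */
    uint32_t read_word(uint32_t addr)
    {
        uint32_t block  = addr / BLOCK_WORDS;          /* which memory block             */
        uint32_t offset = addr % BLOCK_WORDS;          /* word position within the block */
        cache_line_t *l = &cache[block % CACHE_LINES]; /* direct-mapped line             */

        if (!(l->valid && l->tag == block)) {          /* cache miss                     */
            memcpy(l->words, &main_memory[block * BLOCK_WORDS], sizeof l->words);
            l->tag   = block;
            l->valid = true;
        }
        return l->words[offset];                       /* deliver the word to the processor */
    }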
The main memory consists of up to 2^n addressable words, with each word having a unique n-bit address. For mapping purposes, this memory is considered to consist of a number of fixed-length blocks of K words each. That is, there are M = 2^n/K blocks. The cache module consists of C lines of K words each, and the number of lines is considerably less than the number of main memory blocks.
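As a purely illustrative instance of these relations, take n = 16 and K = 4: the memory then holds 2^16 = 65,536 words arranged as M = 65,536/4 = 16,384 blocks, and a cache of, say, C = 128 lines covers only a small fraction of them at any one time. The short C program below (all values chosen for the example) checks that arithmetic:

    #include <stdio.h>

    int main(void)
    {
        const unsigned n = 16;                 /* address width in bits (illustrative) */
        const unsigned K = 4;                  /* words per block                      */
        const unsigned C = 128;                /* lines in the cache module            */

        unsigned long words  = 1UL << n;       /* 2^n addressable words                */
        unsigned long blocks = words / K;      /* M = 2^n / K                          */

        printf("words  = %lu\n", words);       /* 65536 */
        printf("blocks = %lu\n", blocks);      /* 16384 */
        printf("cache lines C = %lu, far fewer than M = %lu blocks\n",
               (unsigned long)C, blocks);
        return 0;
    }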
FIG. 1 is a block diagram illustrating a simplified picture of a network involving a processor 12 with a cache module 40 connected via address, control, and data lines 43, 44, and 45, respectively. Address and data lines 43 and 45 are also attached to address and data buffers 41 and 42, respectively, which in turn attach to system bus 20, from which main memory (not shown) is reached.
Typically, processor 12 generates an address of a word to be read. If a “hit” occurs (the word is contained in cache module 40), the word is delivered to processor 12. When this cache hit occurs, the data and address buffers 42 and 41, respectively, are disabled, and communication is only between the processor 12 and the cache module 40, with no system bus traffic. When a cache “miss” occurs (the word is not contained in cache module 40), the desired address is loaded onto system bus 20 and the data is returned from main memory (not shown) through data buffer 42 to both the cache module 40 and the processor 12. With a cache miss, a line in the cache may be overwritten or copied out of cache module 40 when new data is stored in the cache module. This overwritten line is referred to as a “victim block” or a “victim line.”
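The victim-block handling just described can be sketched as a simple write-back fill. The fragment below is an assumption-laden illustration (fill_line, the dirty flag, a toy main_memory), not the policy or interface defined by the patent:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_WORDS 8
    #define CACHE_LINES 64

    typedef struct {
        bool     valid, dirty;                 /* dirty = modified since it was loaded */
        uint32_t tag;
        uint32_t words[BLOCK_WORDS];
    } cache_line_t;

    static cache_line_t cache[CACHE_LINES];
    static uint32_t     main_memory[1u << 16]; /* toy backing store */

    /* On a miss, the line currently occupying the slot is the "victim": if it
     * holds modified data it is copied back to main memory before the new block
     * overwrites it. */
    void fill_line(uint32_t block)
    {
        cache_line_t *l = &cache[block % CACHE_LINES];

        if (l->valid && l->dirty)              /* write the victim block back */
            memcpy(&main_memory[l->tag * BLOCK_WORDS], l->words, sizeof l->words);

        memcpy(l->words, &main_memory[block * BLOCK_WORDS], sizeof l->words);
        l->tag   = block;
        l->valid = true;
        l->dirty = false;
    }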
The basic structure of a conventional multi-processor computer system 10 employing several cache modules is shown in FIG. 2. Computer system 10 includes processors 12, 120, and 220 as shown, which are connected to various peripheral devices including input/output (I/O) devices 14 (such as a display monitor, keyboard, graphical pointer (mouse), and a permanent storage device (hard disk)), memory 16 (such as random access memory or RAM) that is used by processors 12, 120, and 220 to carry out program instructions, and firmware 18 whose primary purpose is to seek out and load an operating system from one of the peripherals (usually the permanent memory device) whenever computer system 10 is first turned on. Processors 12, 120, and 220 communicate with the peripheral devices by various means, including a generalized interconnect or system bus 20, or direct-memory-access channels (not shown).
Processor 12, as well as each of the other processors 120 and 220, includes a processor core 22 having a plurality of registers and execution units, which carry out program instructions 13 in order to operate the computer system 10. As shown, processor 12 further includes one or more cache modules, such as an instruction cache 24 and a data cache 26, which are implemented using high-speed memory devices. As described above, cache modules are commonly used to temporarily store values that might be repeatedly accessed by the processor, in order to speed up processing by avoiding the longer step of loading the values from memory 16. These cache modules are referred to as “on-board” when they are integrally packaged with the processor core on a single integrated chip 28. Each cache module is associated with a cache controller (not shown) that manages the transfer of data and instructions between the processor core 22 and the cache.
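A minimal sketch of how such a controller might route accesses to the split on-board caches, using hypothetical, stubbed lookup hooks (icache_lookup, dcache_lookup, l2_or_memory_read) that are not taken from the patent:

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { FETCH_INSTRUCTION, LOAD_DATA } access_kind_t;

    /* Hypothetical per-cache lookups, stubbed so the sketch compiles on its own. */
    static bool icache_lookup(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
    static bool dcache_lookup(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
    static uint32_t l2_or_memory_read(uint32_t addr)        { return addr; /* dummy data */ }

    /* The controller routes each access to the matching on-board cache:
     * instruction fetches go to the instruction cache, loads go to the data cache. */
    uint32_t core_access(access_kind_t kind, uint32_t addr)
    {
        uint32_t word;
        bool hit = (kind == FETCH_INSTRUCTION) ? icache_lookup(addr, &word)
                                               : dcache_lookup(addr, &word);
        return hit ? word : l2_or_memory_read(addr);
    }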
Processor 12 can include additional cache modules, such as cache module 30, which is referred to as a level 2 (L2) cache since it supports the on-board (level 1) caches 24 and 26. In other words, cache module 30 acts as an intermediary between memory 16 and the on-board caches, and can store a much larger amount of information (instructions and data) than the on-board caches can, but at a longer access penalty. Cache module 30 is connected to system bus 20, and all loading of information from memory 16 into processor core 22 comes through cache module 30.
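The hierarchy described above is searched one level after another, which is the behavior the following paragraph identifies as a drawback. A minimal sketch, assuming hypothetical stubbed helpers (l1_lookup, l2_lookup, memory_read) rather than any interface from the patent:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-level lookups: each returns true on a hit and writes the
     * word through *out.  Stubbed here so the sketch stays self-contained. */
    static bool l1_lookup(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
    static bool l2_lookup(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
    static uint32_t memory_read(uint32_t addr)          { return addr; /* dummy data */ }

    /* Conventional hierarchy: levels are searched one after another, so every
     * L1 miss pays an additional L2 lookup before main memory is consulted. */
    uint32_t load(uint32_t addr)
    {
        uint32_t word;
        if (l1_lookup(addr, &word))      /* level-1 (on-board) cache */
            return word;
        if (l2_lookup(addr, &word))      /* level-2 cache module     */
            return word;
        return memory_read(addr);        /* fall back to main memory */
    }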
One drawback of the conventional cache module arrangement shown is that the cache modules do not benefit from being interconnected. Without such an interconnection, retrieving data is inefficient: each cache must be searched individually whenever the data is not found in the first cache that is searched.
Accordingly, what is needed is an effective and efficient method for directly connecting cache modules for retrieval of information.
SUMMARY OF THE INVENTION
In accordance with an embodiment of the present invention, a computer system having cache modules interconnected in series includes a first and a second cache module directly coupled to an address-generating line for parallel lookup of data, and data conversion logic coupled between the first cache module and the second cache module.
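For contrast with the sequential search of the background, the sketch below models the parallel lookup suggested by this summary: both cache modules see the generated address at the same time, and a stand-in convert() marks where the data conversion logic would sit. All names, stubs, and the fall-back-to-memory step are assumptions made for illustration, not the claimed implementation:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical module interfaces, stubbed so the sketch compiles on its own;
     * nothing here names the patent's actual logic. */
    static bool cache1_lookup(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
    static bool cache2_lookup(uint32_t addr, uint32_t *out) { (void)addr; (void)out; return false; }
    static uint32_t convert(uint32_t word)                  { return word; /* conversion stand-in */ }
    static uint32_t memory_read(uint32_t addr)              { return addr; /* dummy backing store */ }

    /* Both cache modules receive the generated address at the same time, so
     * neither lookup waits on the other's miss; data found in the second module
     * passes through the conversion logic on its way back. */
    uint32_t parallel_load(uint32_t addr)
    {
        uint32_t w1, w2;
        bool hit1 = cache1_lookup(addr, &w1);   /* first cache module  */
        bool hit2 = cache2_lookup(addr, &w2);   /* second cache module */

        if (hit1)
            return w1;
        if (hit2)
            return convert(w2);                 /* via data conversion logic */
        return memory_read(addr);               /* both missed: go to memory */
    }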


