Cache memory capable of reducing area occupied by data...

Electrical computers and digital processing systems: memory – Addressing combined with specific memory configuration or... – Addressing cache memories

Reexamination Certificate


Details

C711S128000

Reexamination Certificate

active

06763422

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a cache memory to be connected to an MPU (Micro Processing Unit) and more particularly to a data memory in the cache memory.
2. Description of the Related Art
In general, a cache memory is provided between an arithmetic unit such as the MPU and a memory system serving as a main memory, and bridges the gap in processing speed between the arithmetic unit and the memory system. The cache memory has a tag memory, used to store address data of the memory system, and a data memory, used to temporarily store part of the data contained in the memory system as cache data. In the data memory, as is well known, desired cache data is read in one cycle, and a predetermined amount of data fetched from the memory system is written in another single cycle. These operations reduce the waiting cycles of the MPU, thereby achieving high-speed operation between the cache memory and the MPU.
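The role of the tag memory described above can be illustrated with a toy lookup, sketched here in Python for a direct-mapped arrangement (a set-associative cache, as in the figures below, compares one tag per way instead of a single tag). All names and sizes are illustrative assumptions, not taken from the patent.

```python
# Toy sketch of the tag-memory hit check: a lookup hits when the tag
# stored for the indexed line matches the tag bits of the MPU address.
LINES = 512                      # assumed number of lines per way

tag_mem = [None] * LINES         # tag memory: one tag entry per line
data_mem = [None] * LINES        # data memory: cached data per line

def lookup(tag: int, index: int):
    if tag_mem[index] == tag:    # cache hit: serve the MPU from the cache
        return True, data_mem[index]
    return False, None           # cache miss: fetch from the memory system
```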
FIG. 2 is a schematic block diagram showing the configuration of a data memory in a conventional cache memory employing a set-associative method. In the example shown in FIG. 2, the data memory employs a four-way set-associative method in which unitized data memory macro units 10-13, 20-23, 30-33 and 40-43, used to manage cache data, are provided in the four ways 0-3, respectively. In FIG. 2, the configurations of the data memory macro units 20-23 mounted in way 1, the data memory macro units 30-33 mounted in way 2 and the data memory macro units 40-43 mounted in way 3 are the same as that of the data memory macro units 10-13 mounted in way 0. Write data D0-D3 are the values selected by multiplexers 50-53, either from the word data stored in a line buffer 1 or from MPU write data, and are input, as appropriate, to the data memory macro units 10-43. Of the read data output from the data memory macro units 10-43, one is selected by multiplexers 60-63 and a multiplexer 70 and is output to the MPU.
In the data memory of the cache, a data memory macro unit as described above is provided for every word in all the ways (in the example shown in FIG. 2, the number of words is four), so that one final read data can be output in one cycle. Each of the data memory macro units 10 to 43 is configured to be accessed simultaneously. At the time of reading, one data can be read simultaneously from each of the data memory macro units 10 to 43 by inputting the address fed from the MPU to the address terminal A of all the data memory macro units 10 to 43 and by inputting the asserted chip enable signals [0:3] to the chip enable input terminal CE of each of the data memory macro units 10 to 43. Here, "[0:3]" denotes the chip enable signals [0] to [3]. In the data memory, one required data is finally selected from the data read from the data memory macro units 10 to 43, based on the word address contained in the address fed from the MPU and the way number, in which a cache hit has been found, fed from the tag memory. The final data selected in this way is fed to the MPU.
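The one-cycle read path just described can be sketched as follows, assuming 4 ways, 4 words per line and 512 lines; the nested lists stand in for the 16 macro units of FIG. 2, and the function mimics the two stages of multiplexing (60-63, then 70). This is an illustrative model, not the patented circuit.

```python
# Sketch of the read path: all 16 macro units (4 ways x 4 words) are
# "read" at the same index, the per-way multiplexers pick the addressed
# word, and the final multiplexer picks the way reported as a hit.
WAYS, WORDS, LINES = 4, 4, 512

# macro[way][word][index] stands in for one data memory macro unit entry
macro = [[[0] * LINES for _ in range(WORDS)] for _ in range(WAYS)]

def read(index: int, word: int, hit_way: int) -> int:
    # Every macro unit outputs its data for this index simultaneously...
    candidates = [[macro[w][d][index] for d in range(WORDS)]
                  for w in range(WAYS)]
    # ...the word-address multiplexers select one word per way...
    per_way = [candidates[w][word] for w in range(WAYS)]
    # ...and the way multiplexer selects the hit way's data for the MPU.
    return per_way[hit_way]
```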
Moreover, writing of data to each of the data memory macro units 10 to 43 is carried out when a request for writing is fed from the MPU or when a cache miss occurs due to the absence of required data in the data memory. In the case of a cache miss, however, the writing is carried out after the data read from the memory system have been stored in all the word data areas 0 to 3 of the line buffer 1 shown in FIG. 2. When data are stored in all the word data areas of the line buffer 1, the data are written to all the data memory macro units in any one of the ways 0 to 3. For example, when the writing is performed in way 0, in order to write all word data simultaneously in one cycle, the address fed from the MPU is input to the address input terminal A of each of the data memory macro units 10 to 13 and, at the same time, each of the write data D0 to D3 is input to the data input terminal D of the corresponding one of the data memory macro units 10 to 13. Moreover, by inputting the asserted chip enable signals 0[0:3] to the chip enable input terminals CE of all the word data areas in way 0 and by inputting the asserted write enable signals 0[0:3] to the write enable input terminals WE of all the word data areas in way 0, all the word data can be written simultaneously to the data memory macro units 10 to 13 in way 0.
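The line-fill write described above can be sketched as follows, under the same assumed sizes (4 ways, 4 words, 512 lines): once the line buffer holds all four word data areas fetched from the memory system, the four macro units of the chosen way are written at the same index in a single cycle. Names are illustrative, not the patent's.

```python
# Sketch of the line-fill write: the buffer must be full, then all word
# units of one way are written simultaneously (modeling the asserted
# chip enable and write enable signals of that way only).
WAYS, WORDS, LINES = 4, 4, 512
macro = [[[0] * LINES for _ in range(WORDS)] for _ in range(WAYS)]

def fill_line(way: int, index: int, line_buffer: list) -> None:
    assert len(line_buffer) == WORDS  # write only when the buffer is full
    for d in range(WORDS):            # all word units written in one cycle
        macro[way][d][index] = line_buffer[d]
```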
FIG. 3 is a diagram explaining a conventional format of the address fed from the MPU. In the cache memory, the address output from the MPU is used in a state where it is divided into four portions: a tag data portion X1, an index address portion X2, a word address portion X3, and a byte address portion X4. The tag data portion X1 is the data stored in the tag memory of the cache. The address of the data for which an access is required by the MPU is compared with valid data in the tag memory and, when the two match, a cache hit occurs. The index address portion X2 is a bit string indicating a line position in each of the ways in the cache memory. The word address portion X3 is a bit string indicating a word position within a line. The byte address portion X4 is a bit string indicating a byte position within a word.
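The four-portion split can be sketched as simple bit-field extraction. The field widths here are assumptions for illustration (4-byte words, 4 words per line, 512 lines per way), not values stated in the patent.

```python
# Sketch of splitting a 32-bit MPU address into the fields of FIG. 3,
# assuming 2 byte bits, 2 word bits and 9 index bits; the remaining
# high-order bits form the tag stored in the tag memory.
BYTE_BITS, WORD_BITS, INDEX_BITS = 2, 2, 9

def split_address(addr: int):
    byte = addr & ((1 << BYTE_BITS) - 1)                 # X4: byte in word
    word = (addr >> BYTE_BITS) & ((1 << WORD_BITS) - 1)  # X3: word in line
    index = (addr >> (BYTE_BITS + WORD_BITS)) \
            & ((1 << INDEX_BITS) - 1)                    # X2: line position
    tag = addr >> (BYTE_BITS + WORD_BITS + INDEX_BITS)   # X1: tag data
    return tag, index, word, byte
```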
FIG. 4 is a diagram explaining conventional data storing positions in the data memory macro units 10-43 contained in the ways 0 to 3. For example, the data memory macro units 10 to 13 store the data corresponding to the values 0 to 3 of the word address portion X3 shown in FIG. 3. As the physical memory address of each of the data memory macro units 10-13 in way 0, that is, as the cache memory address, the same number as that of the index address portion X2 is employed. Similarly, as the physical memory addresses of the data memory macro units 20 to 43 in the ways 1 to 3, the same number as that of the index address portion X2 is employed. Examples of the data storing positions at the time of reading and writing are shown by the shaded areas in FIG. 4. At the time of reading, if the address requested by the MPU has, for example, index address "0" and word address "2", the data at positions (x, 0, z) (x=0 to 3, z=0 to 3), which include the data at (x, 0, 2), are read as candidate data, as shown in FIG. 4. Out of these candidate data, one data is selected for each of the ways 0 to 3 and, further, out of the data selected for the ways 0 to 3, the final read data is selected and read. Moreover, at the time of writing, if the index address of the read miss address caused by a cache miss is "511" and the way to be written is "0", the data stored in the line buffer 1 are written to the positions (0, 511, z) as shown in FIG. 4.
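The read example of FIG. 4 can be sketched as a narrowing of candidates: for index 0 and word address 2, all sixteen positions (x, 0, z) are read, and the final data is the one at (hit way, 0, 2). Coordinates below follow the figure's (way, index, word) convention; the hit way is assumed to come from the tag memory.

```python
# Sketch of the candidate set and final selection of FIG. 4.
WAYS, WORDS = 4, 4

def candidates(index: int):
    # All positions (way, index, word) read simultaneously for one index.
    return [(way, index, word)
            for way in range(WAYS) for word in range(WORDS)]

def final_position(index: int, word: int, hit_way: int):
    # Narrow the candidates by word address, then by the hit way.
    cand = candidates(index)
    assert (hit_way, index, word) in cand
    return (hit_way, index, word)
```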
FIG. 5 is a diagram explaining an example of a conventional floor plan for an LSI (Large Scale Integrated circuit) having a cache memory. In FIG. 5, a TAG memory section 81 of the cache memory, an MPU 82, a control section 83, and a data memory section 84 of the cache memory are shown. The size of a die 80 indicates the outer dimensions of the LSI chip. In the example of FIG. 5, the data memory section 84 has 16 data memory macro units 85. Each of the data memory macro units 85 is unitized, that is, operates as a separate unit, and corresponds to one of the 16 data memory macro units 10 to 13, 20 to 23, 30 to 33, and 40 to 43 shown in FIG. 2.
FIG. 6 is a time chart explaining operations of the conventional data memory macro units 10 to 43 at the time
