Memory controller and a cache for accessing a main memory,...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

Subclasses: C711S105000, C711S159000
Status: active
Patent number: 06542969


BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a memory controller, a cache device, a memory control system, a memory control method, and a recording medium, and particularly to a technique for controlling a storage device that supports fast data input/output.
2. Description of the Related Art
In recent years, as the clock speeds of CPUs and the processing speeds of various other electronic circuits in computer systems have risen, high-speed memory interfaces have become necessary. To this end, high-speed DRAMs (Dynamic Random Access Memories) and their successors exploit the fact that the addresses a CPU issues in succession are usually close to one another: an area of the memory cell array that has been activated is kept active for a while, so that subsequent accesses to that area are faster.
FIG. 1 shows a schematic arrangement of a DRAM. Referring to FIG. 1, a memory cell array 35 comprises a plurality of word lines, a plurality of bit lines perpendicular to the word lines, and a plurality of memory cells located at the intersections of the word and bit lines. Upper bits of an address externally input to an address buffer 31 indicate a row address, and lower bits thereof indicate a column address. The row address is held in a row address buffer 32, and the column address is held in a column address buffer 33.
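Although the patent gives no code, this row/column decomposition can be sketched in C as below; the 24-bit address and the 12-bit row and column field widths are assumptions for illustration, not values from the patent.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical layout: the upper 12 bits of a 24-bit address select
     * the row (word line) and the lower 12 bits select the column.     */
    #define COL_BITS 12
    #define COL_MASK ((1u << COL_BITS) - 1u)

    int main(void)
    {
        uint32_t addr = 0x3A7F42;            /* example 24-bit address           */
        uint32_t row  = addr >> COL_BITS;    /* held in row address buffer 32    */
        uint32_t col  = addr & COL_MASK;     /* held in column address buffer 33 */
        printf("addr=0x%06X row=0x%03X col=0x%03X\n",
               (unsigned)addr, (unsigned)row, (unsigned)col);
        return 0;
    }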
The row address held in the row address buffer 32 is decoded by a row decoder 34 to select one word line in the memory cell array 35. In a readout operation, data in the respective memory cells connected to the selected word line are read out onto the corresponding bit lines as small voltages, which are amplified by a sense amplifier 36.
The column address held in the column address buffer 33 is decoded by a column decoder 37 to open the column gate for the one bit line corresponding to the decoded column address. Data on the thus selected bit line is output onto a common line 40 through the opened column gate. In a readout operation, the thus obtained data DQ is externally output through a data I/O buffer 38.
In a writing operation, data DQ externally input through the data I/O buffer 38 is supplied to a bit line in the memory cell array 35 through the common line 40 and the corresponding column gate selected according to a given column address. The data is written into the memory cell at the intersection of that bit line and the word line selected according to a given row address.
The above-mentioned elements 31 to 38 are under the control of a control circuit 39. The control circuit 39 is externally supplied with a row address strobe signal /RAS, a column address strobe signal /CAS, and a write enable signal /WE. Note that an inverted signal, expressed by a signal name with an overline in FIG. 1 (and FIGS. 7 to 10), is expressed in this specification by attaching the symbol “/” to the signal name.
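As a rough, hedged illustration of how such strobes multiplex one set of address pins between the row and column addresses (the timing, polarity handling, and widths here are simplified assumptions, not details from FIG. 1):

    #include <stdio.h>
    #include <stdint.h>

    /* Simplified model: the same address pins carry the row address while
     * /RAS is asserted (low) and the column address while /CAS is asserted. */
    static uint32_t latched_row, latched_col;

    static void cycle(int ras_n, int cas_n, uint32_t addr_pins)
    {
        if (!ras_n) latched_row = addr_pins;   /* /RAS low: latch row address    */
        if (!cas_n) latched_col = addr_pins;   /* /CAS low: latch column address */
    }

    int main(void)
    {
        cycle(0, 1, 0x1A3);                    /* row address strobe    */
        cycle(1, 0, 0x07F);                    /* column address strobe */
        printf("row=0x%03X col=0x%03X\n",
               (unsigned)latched_row, (unsigned)latched_col);
        return 0;
    }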
In this type of DRAM, successive read and write accesses are mostly made to addresses near to each other. After an access to a row address completes, the same row address is likely to be accessed next. For this reason, as long as no access to a different row address is required, the word line selected according to a row address is kept active so that subsequent accesses can be made by selecting a column address only. A faster access is thereby attained.
In order to use this function more effectively, a recent memory controller keeps a block of a predetermined size (one word line) active even after data in the block has been accessed, so that it can respond faster when the same block is accessed again. The unit size of such a block is called a “page”, and a DRAM utilizing this function is sometimes called a “fast page DRAM”.
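A minimal sketch of this page-mode policy, assuming a hypothetical controller with a single bank; the cycle counts 2 and 6 are illustrative assumptions, not figures from the patent.

    #include <stdio.h>
    #include <stdint.h>

    #define COL_BITS 12                        /* assumed page (row) boundary */

    static uint32_t open_row = UINT32_MAX;     /* no row active yet */

    /* Return an illustrative access latency: a page hit needs only a
     * column access; a page miss needs a full row activation as well. */
    static int access_cycles(uint32_t addr)
    {
        uint32_t row = addr >> COL_BITS;
        if (row == open_row)
            return 2;                          /* page hit: column access only  */
        open_row = row;                        /* activate the new row          */
        return 6;                              /* page miss: full RAS/CAS cycle */
    }

    int main(void)
    {
        uint32_t seq[] = { 0x001000, 0x001004, 0x001008, 0x002000 };
        for (unsigned i = 0; i < 4; i++)
            printf("addr=0x%06X cycles=%d\n",
                   (unsigned)seq[i], access_cycles(seq[i]));
        return 0;
    }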
On the other hand, it is common practice in recent computer systems to insert a cache memory, composed of memory elements faster than those of the main memory, between a CPU and the main memory, because data once accessed is likely to be accessed again in the near future. More specifically, once data in the main memory has been accessed, it is registered in the cache memory, and, when the same data is accessed next, it is read out not from the main memory but from the cache memory. The effective access speed to the main memory is thereby increased.
In this computer system with the cache memory, when an access request to data in the main memory is issued, the cache memory is first referred to. If the requested data is present in the cache memory (cache hit), the data is immediately transferred to a CPU. If the requested data is not present in the cache memory (cache miss), a block of an appropriate size including the requested data is read out from the main memory, and stored in the cache memory. At this time, if no empty block is available in the cache memory, a block that is least likely to be used again is selected and replaced by the new data.
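The hit/miss flow just described might look like the following sketch; the fully associative organization, block size, and eviction choice are hypothetical simplifications (a real controller would select the block least likely to be used again, e.g. by LRU, rather than slot 0).

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    enum { NBLOCKS = 4, BLOCK = 16 };

    struct line { int valid; uint32_t tag; uint8_t data[BLOCK]; };
    static struct line cache[NBLOCKS];
    static uint8_t main_mem[1 << 16];

    static uint8_t read_byte(uint32_t addr)
    {
        uint32_t tag = addr / BLOCK;
        for (int i = 0; i < NBLOCKS; i++)
            if (cache[i].valid && cache[i].tag == tag)    /* cache hit */
                return cache[i].data[addr % BLOCK];

        int victim = 0;                                   /* cache miss */
        for (int i = 0; i < NBLOCKS; i++)
            if (!cache[i].valid) { victim = i; break; }   /* prefer an empty block  */
        cache[victim].valid = 1;                          /* otherwise replace slot 0 */
        cache[victim].tag   = tag;
        memcpy(cache[victim].data, &main_mem[tag * BLOCK], BLOCK);
        return cache[victim].data[addr % BLOCK];
    }

    int main(void)
    {
        main_mem[0x1234] = 42;
        printf("%d\n", read_byte(0x1234));   /* miss: block is fetched  */
        printf("%d\n", read_byte(0x1234));   /* hit: served from cache  */
        return 0;
    }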
Cache memories are roughly classified into the store-through type and the store-back type. In the store-through type, whenever the cache contents are rewritten, the main memory contents are rewritten accordingly, so that the latest data are always stored in the main memory as well. In the store-back type, by contrast, only the cache contents are rewritten, and the latest contents of the cache memory are written back to the main memory only when a block is to be reassigned due to a cache miss. With the store-back type, the contents of the cache memory can therefore differ from those of the corresponding part of the main memory.
In the store-back type, an area of the cache memory in which only the cache contents have been rewritten is called a “dirty entry”. When blocks are reassigned, a block including no dirty entry can simply be overwritten with the corresponding block loaded from the main memory. A block including a dirty entry, however, must first have its contents written out to the corresponding block in the main memory before another main-memory block is assigned to the cache block. Such an operation is called “replacement of a dirty entry”.
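A minimal sketch of this dirty-entry handling, assuming a hypothetical single cache block with the same illustrative block size as above; the write-back step before reassignment is the point here.

    #include <string.h>
    #include <stdint.h>

    enum { BLOCK = 16 };

    struct line { int valid, dirty; uint32_t tag; uint8_t data[BLOCK]; };
    static uint8_t main_mem[1 << 16];

    /* Store-back write: only the cache copy changes; the dirty bit
     * records that main memory is now stale.                        */
    static void write_byte(struct line *ln, uint32_t offset, uint8_t v)
    {
        ln->data[offset] = v;
        ln->dirty = 1;
    }

    /* Reassign *ln to the block holding new_tag, writing a dirty
     * entry back to main memory first ("replacement of a dirty entry"). */
    static void replace(struct line *ln, uint32_t new_tag)
    {
        if (ln->valid && ln->dirty)
            memcpy(&main_mem[ln->tag * BLOCK], ln->data, BLOCK);  /* write back     */
        ln->tag   = new_tag;
        ln->valid = 1;
        ln->dirty = 0;
        memcpy(ln->data, &main_mem[new_tag * BLOCK], BLOCK);      /* load new block */
    }

    int main(void)
    {
        struct line ln = { 0 };
        replace(&ln, 0x123);            /* load a block (clean)            */
        write_byte(&ln, 4, 42);         /* modify only the cache copy      */
        replace(&ln, 0x456);            /* dirty: written back before load */
        return main_mem[0x123 * BLOCK + 4] == 42 ? 0 : 1;
    }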
In recent years, as CPUs have become faster and cache capacities larger, the store-through type, which must access the main memory frequently, has been giving way to the store-back type, which accesses the main memory less frequently. This is because memory access speed is often considered an important factor in the performance of a data processing system.
FIG. 2 shows a schematic arrangement of a cache memory. As shown in FIG. 2, the cache memory generally comprises a cache (data area) 41 for storing some of the data stored in a main memory, and a tag memory (tag area) 42 for storing a part of the main-memory address (tag) corresponding to each of the data stored (as entries) in the cache 41.
Since the cache 41 has a smaller capacity than the main memory, the addresses corresponding to the respective entries in the cache 41 are registered in the tag memory 42. The address of the data requested by an access request from the CPU is compared with each of the registered addresses in the tag memory 42. A cache hit or miss is determined by judging whether or not the address of the requested data coincides with one of the registered addresses in the tag memory 42, i.e., whether or not the requested data is present in the cache 41.
In this case, however, huge hardware would be required if the address of the requested data were directly compared with all of the entries in the cache 41, i.e., with all of the tags in the tag memory 42. For this reason, the following scheme (the set associative memory scheme) is generally used. Entries whose lower address bits equal those (the INDEX) of the address attendant upon the access request are selected from among all the entries in the cache 41, and the address is then compared, in a comparator 43, with the tags of only the selected entries. Using the lower bits of the address in this manner greatly reduces the number of tags that must be compared, and hence the size of the comparison hardware.
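A sketch of this set-associative lookup follows; the set count, associativity, and block size are illustrative assumptions, not parameters from the patent.

    #include <stdio.h>
    #include <stdint.h>

    enum { WAYS = 2, SETS = 256, BLOCK = 16 };

    struct line { int valid; uint32_t tag; };
    static struct line cache[SETS][WAYS];

    /* The INDEX (lower address bits) selects one set; only the WAYS
     * tags in that set are compared, not every tag in the cache.    */
    static int lookup(uint32_t addr)
    {
        uint32_t index = (addr / BLOCK) % SETS;   /* lower bits: select set */
        uint32_t tag   = (addr / BLOCK) / SETS;   /* upper bits: stored tag */
        for (int w = 0; w < WAYS; w++)
            if (cache[index][w].valid && cache[index][w].tag == tag)
                return 1;                         /* cache hit  */
        return 0;                                 /* cache miss */
    }

    int main(void)
    {
        uint32_t addr = 0x1234;
        cache[(addr / BLOCK) % SETS][0] =
            (struct line){ 1, (addr / BLOCK) / SETS };
        printf("hit=%d\n", lookup(addr));     /* 1: registered above */
        printf("hit=%d\n", lookup(0x9999));   /* 0: not in the cache */
        return 0;
    }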
