Cache memory system
Patent number: 06219759
Type: Reexamination Certificate (active)
Filed: 1998-11-06
Issued: 2001-04-17
Examiner: Nguyen, Hiep T. (Department: 2187)
Class: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Other classes: C711S127000, C710S023000, C710S027000, C710S039000
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a cache memory system.
2. Description of the Related Art
As microprocessor speeds have increased, hierarchically structured cache memory has become a popular way to speed up access to memory. When the data to be accessed is not in the cache memory of a cache memory system, the access results in a cache miss and the data must be transferred from main memory to cache memory. The processor must therefore suspend its processing until the transfer of data from main memory to cache memory is finished, reducing processing throughput.
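To make this cost concrete, the following C sketch (an illustration only, not part of the patent; the cache geometry and the 100-cycle penalty are assumptions chosen for the example) models a direct-mapped cache read in which a miss forces the processor to wait while a whole line is filled from main memory:

/* Illustrative sketch only -- not from the patent. Models how a cache
 * miss stalls the processor while a line is copied from main memory.
 * All sizes and the miss penalty are hypothetical. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define LINE_SIZE 32                      /* bytes per cache line (assumed) */
#define NUM_LINES 64                      /* number of cache lines (assumed) */
#define MEM_SIZE  (64 * 1024)             /* main memory size (assumed) */

static uint8_t  main_memory[MEM_SIZE];
static uint8_t  cache_data[NUM_LINES][LINE_SIZE];
static uint32_t cache_tag[NUM_LINES];
static int      cache_valid[NUM_LINES];
static long     stall_cycles;             /* accumulated miss penalty */

uint8_t cache_read(uint32_t addr)
{
    uint32_t line = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag  = addr / (LINE_SIZE * NUM_LINES);

    if (!cache_valid[line] || cache_tag[line] != tag) {
        /* Miss: the CPU must suspend processing until the whole line
         * has been transferred from main memory -- the problem the
         * background describes. */
        memcpy(cache_data[line], &main_memory[addr - addr % LINE_SIZE],
               LINE_SIZE);
        cache_tag[line]   = tag;
        cache_valid[line] = 1;
        stall_cycles += 100;              /* assumed miss penalty */
    }
    return cache_data[line][addr % LINE_SIZE];
}

int main(void)
{
    main_memory[0x1234] = 42;
    printf("read: %u (stalled %ld cycles)\n", cache_read(0x1234), stall_cycles);
    printf("read: %u (stalled %ld cycles)\n", cache_read(0x1234), stall_cycles);
    return 0;
}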
To increase the cache memory hit ratio, various methods have been proposed. For example, Japanese Patent Publication Kokai JP-A No. Hei 4-190438 discloses a method in which the program execution flow is read ahead so that data to be used at branch addresses is brought into cache memory in advance.
FIG. 3 is a block diagram showing the configuration of a conventional cache memory system. As shown in FIG. 3, a central processing unit CPU 301 is connected to cache memory CM 303 and to a cache controller CMC 304 via an address bus 311 and a data bus 321. A sub-processing unit SPU 302 is connected to the cache controller CMC 304 via an address bus 312 and the data bus 321, and to the cache memory CM 303 via the data bus 321. The sub-processing unit 302 monitors the instructions sent to the central processing unit CPU 301 via the data bus 321. Upon detecting a cache update instruction that the compiler automatically inserted before a jump instruction, the sub-processing unit 302 tells the cache controller CMC 304 to update cache memory. The cache controller CMC 304 does not update cache memory itself; instead, it passes update address information to a DMA controller 305 and causes it to start transferring data from main memory 306 to the location in cache memory 303 indicated by that address information. The cache update instruction itself is meaningless to the central processing unit CPU 301 and is simply ignored by it. Later, when control is passed to the jump instruction, no cache miss occurs because the data has already been transferred from main memory 306 to cache memory CM 303.
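The flow just described can be summarized in a short C sketch. This is an illustration only, not code from the patent: the opcode encoding, the structure layout, and every function name are hypothetical.

/* Illustrative C sketch of the FIG. 3 flow -- not actual patent code. */
#include <stdint.h>
#include <stdio.h>

enum opcode { OP_NOP, OP_CACHE_UPDATE, OP_JUMP };   /* assumed encoding */

struct insn { enum opcode op; uint32_t operand; };

/* DMA controller 305: copies a block from main memory 306 into cache
 * memory 303 at the given address (modelled here as a message). */
static void dma_start_transfer(uint32_t addr)
{
    printf("DMA: main memory -> cache, block at 0x%08x\n", addr);
}

/* Cache controller CMC 304: does not move data itself; it only forwards
 * the update address to the DMA controller. */
static void cmc_request_update(uint32_t addr)
{
    dma_start_transfer(addr);
}

/* Sub-processing unit SPU 302: watches the instruction stream on the
 * data bus 321 and reacts to the compiler-inserted update instruction,
 * which the CPU 301 itself simply ignores. */
static void spu_monitor(const struct insn *stream, int n)
{
    for (int i = 0; i < n; i++) {
        if (stream[i].op == OP_CACHE_UPDATE)
            cmc_request_update(stream[i].operand);  /* prefetch target */
    }
}

int main(void)
{
    /* The compiler inserted OP_CACHE_UPDATE just before the jump, so the
     * jump target's data is cached by the time control arrives there. */
    struct insn program[] = {
        { OP_NOP,          0      },
        { OP_CACHE_UPDATE, 0x2000 },
        { OP_JUMP,         0x2000 },
    };
    spu_monitor(program, 3);
    return 0;
}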
Another proposed method is for the sub-processing unit SPU 302 to fetch an instruction that is several instructions ahead of the current instruction, detect a coming cache miss in advance, and cause the cache controller CMC 304 to update cache memory.
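A minimal sketch of this look-ahead variant follows, again illustrative only: the look-ahead distance of four and all names are invented for the example. Note that reading the instruction at pc + LOOKAHEAD while the current instruction is also being fetched implies a second read port on the memory, which is the source of the second problem discussed below.

/* Illustrative sketch of the look-ahead variant -- not patent code. */
#include <stdint.h>
#include <stdio.h>

#define LOOKAHEAD 4                            /* assumed prefetch distance */

enum opcode { OP_NOP, OP_LOAD, OP_JUMP };
struct insn { enum opcode op; uint32_t operand; };

static int cache_contains(uint32_t addr)       /* stubbed cache probe */
{
    (void)addr;
    return 0;                                  /* pretend every probe misses */
}

static void cmc_request_update(uint32_t addr)  /* forwards to DMA, as in FIG. 3 */
{
    printf("prefetch requested for 0x%08x\n", addr);
}

static void spu_lookahead(const struct insn *stream, int n, int pc)
{
    if (pc + LOOKAHEAD < n) {                  /* second read port assumed */
        const struct insn *f = &stream[pc + LOOKAHEAD];
        if (f->op == OP_LOAD && !cache_contains(f->operand))
            cmc_request_update(f->operand);    /* start update before the miss */
    }
}

int main(void)
{
    struct insn program[8] = {
        {OP_NOP,0}, {OP_NOP,0}, {OP_NOP,0}, {OP_NOP,0},
        {OP_LOAD,0x2000}, {OP_NOP,0}, {OP_NOP,0}, {OP_NOP,0}
    };
    spu_lookahead(program, 8, 0);
    return 0;
}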
A general description of the mechanism of cache memory can be found, for example, in “Computer Configuration and Design” (Nikkei BP).
However, the prior art described above has the following problems.
The first problem is that a system according to the prior art requires hardware specifically designed to monitor programs, resulting in a large circuit.
The second problem is that reading an instruction several instructions ahead of the current instruction requires a memory with two or more ports, and multi-port memory is normally large.
The third problem is that, because the update instruction is inserted automatically by the compiler at a fixed location several instructions ahead, the time at which cache memory updating starts cannot be set freely. Even when updating is known to take longer, because of a larger cache block size or a longer main memory access time, the update cannot be started earlier. As a result, cache memory updating is sometimes not completed within the predetermined period of time.
The fourth problem is that automatically inserting the cache update instruction, through the use of a compiler, at a location several instructions ahead of the jump instruction requires the compiler to have that function built in, increasing the cost of development tools such as the compiler.
SUMMARY OF THE INVENTION
The present invention seeks to solve the problems associated with the prior art described above. It is an object of the present invention to provide a cache memory system that is based on the configuration of the main memory or the cache memory and that updates cache memory efficiently, without an additional compiler function and without a special device for monitoring instructions.
To achieve the above object, the present invention provides a cache memory system comprising: a main memory composed of a plurality of banks; a cache controller that sends a cache update instruction to a Direct Memory Access (DMA) controller as directed by a central processing unit; and the DMA controller, which transfers data from the main memory to cache memory according to the instruction received from said cache controller.
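As a rough illustration of the claimed arrangement, here is a minimal C sketch assuming four banks and invented function names; none of these sizes or identifiers come from the patent itself.

/* Illustrative sketch of the claimed arrangement -- assumptions
 * throughout: bank count, sizes, and all names are invented. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NUM_BANKS  4
#define BANK_SIZE  (16 * 1024)
#define BLOCK_SIZE 32

static uint8_t main_memory[NUM_BANKS][BANK_SIZE];  /* banked main memory */
static uint8_t cache_memory[64][BLOCK_SIZE];

/* DMA controller: transfers one block from a main-memory bank into a
 * cache line, as instructed by the cache controller. */
static void dma_transfer(int bank, uint32_t offset, int line)
{
    memcpy(cache_memory[line], &main_memory[bank][offset], BLOCK_SIZE);
}

/* Cache controller: on a directive from the CPU, it sends the update
 * instruction to the DMA controller rather than moving data itself. */
static void cache_controller_update(uint32_t addr, int line)
{
    int bank = (addr / BANK_SIZE) % NUM_BANKS;     /* assumed bank mapping */
    uint32_t offset = addr % BANK_SIZE;
    dma_transfer(bank, offset - offset % BLOCK_SIZE, line);
}

int main(void)
{
    main_memory[1][0x100] = 7;                     /* addr 0x4100 lives in bank 1 */
    cache_controller_update(0x4100, 3);            /* CPU-directed update */
    printf("cache line 3, byte 0: %u\n", cache_memory[3][0]);
    return 0;
}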
REFERENCES:
patent: 5276852 (1994-01-01), Callander et al.
patent: 5499353 (1996-03-01), Kadlec et al.
patent: 5802569 (1998-09-01), Genduso et al.
patent: 5822616 (1998-10-01), Hirooka
patent: 6006317 (1999-12-01), Ramagopal et al.
patent: 6012106 (2000-01-01), Schumann et al.
patent: 4-190438 (1992-07-01), None
patent: WO 96/12229 (1996-04-01), None
“Computer Configuration and Design” (Nikkei BP), Apr. 1996, pp. 416-423.
Foley & Lardner
NEC Corporation
Nguyen Hiep T.