Static information storage and retrieval – Addressing – Plural blocks or banks
Reexamination Certificate
2002-07-03
2004-03-30
Le, Thong Q. (Department: 2818)
Static information storage and retrieval
Addressing
Plural blocks or banks
C365S222000
Reexamination Certificate
active
06714477
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to a semiconductor integrated circuit having memory blocks and, more particularly, to a technique for improving the throughput of a data read operation invoked in response to a read access request, which is useful in application to a semiconductor integrated circuit used as a cache memory, such as one including DRAMs mounted along with logic circuits.
A memory hierarchy of a storage device, when viewed in terms of the temporal and spatial locality of information references, typically comprises memories of a plurality of levels having different access speeds and capacities. Typically, a main memory is provided in the form of a DRAM (Dynamic Random Access Memory) having a low per-bit cost, and, at a memory level closer to the processor or CPU (Central Processing Unit), there is a cache memory comprising an SRAM (Static Random Access Memory) or the like. A cache memory holds data that is temporally or spatially local to data recently used by the processor, so as to provide a throughput better than that of a data read from a lower-level memory.
After the completion of the present invention, the inventor became aware of Japanese laid-open patent applications JP-A-2-297791 and JP-A-6-195261. The descriptions provided in those specifications are directed to a dynamic-type memory (DRAM) and a static-type memory (SRAM) formed on a single-chip semiconductor substrate, and to the use of the DRAM and SRAM together as a cache memory. However, the objects and configuration of the present invention are not described in those specifications.
SUMMARY OF THE INVENTION
The present inventor considered the possibility of mounting a large number of DRAM modules having a relatively low access speed along with logic circuits, and using the arrangement as a cache memory. The study included, for example, a semiconductor integrated circuit mounted with DRAM modules, which can be used as a level 3 (L3) cache memory for a microprocessor in which level 1 (L1) and level 2 (L2) cache memories are built.
According to the investigation by the present inventor, when an attempt is made to reduce the apparent memory read cycle by mounting a large number of DRAM modules together and making them capable of parallel operation, consideration has to be given to some way of preventing competition among the data output actions caused by the parallel operation. In such a case, when a data buffer is employed to prevent data competition, it has been found to be inefficient to perform data buffering where there is no data competition.
When the data processing efficiency of a processor is considered, the most significant object is the improvement of the throughput of read operations invoked in response to a read access by the processor. Here, a read operation of a cache memory may sometimes be part of a copy-back (write-back) operation necessitated by a write access by the processor, and such a read operation is not required to have a high throughput in most cases, because the copy-back operation merely returns data to the main memory in order to replace a dirty cache line on a cache miss. Accordingly, the present inventor has found that, when the invention is to be used as a cache memory, it is necessary to avoid an excessive expansion of the logic scale of the logic circuitry by differentially weighting the importance of improving the read-data throughput according to the purpose of the read data.
For a write access by a processor, there is not much significance in accelerating the write operation itself which occurs in response to a write access request; however, when the data processing efficiency of the processor is of concern, it is necessary to allow the processor to be released from the write operation within a short period of time after the reception of the write access request. In particular, in the case of a DRAM, a refresh of the stored data is required at every refresh interval, and the acceptance of a write access request should not be delayed by such a refresh operation.
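As a purely illustrative aside (not part of the patent text), the following Python sketch shows one way such a requirement could be met: a hypothetical posted-write latch lets the interface accept a write request and release the processor at once, deferring the actual array update until an ongoing refresh completes. The class, its methods, and the single-entry latch are assumptions made for this sketch only.

```python
# Purely illustrative sketch, not taken from the patent text: one common
# way to accept a write request without delay even during a DRAM refresh
# is a posted-write latch.  The interface latches the request and releases
# the processor immediately; the array update is deferred until the
# refresh finishes.  All names here are hypothetical.
class PostedWriteModel:
    def __init__(self):
        self.refresh_busy = False
        self.pending_write = None   # one posted (address, data) pair
        self.array = {}             # stands in for the DRAM array

    def refresh_start(self):
        self.refresh_busy = True

    def refresh_done(self):
        """Refresh finished: commit any write posted while it was running."""
        self.refresh_busy = False
        if self.pending_write is not None:
            address, data = self.pending_write
            self.array[address] = data
            self.pending_write = None

    def write_request(self, address, data):
        """Accept the write at once; the caller is released immediately."""
        if self.refresh_busy:
            self.pending_write = (address, data)   # commit deferred
        else:
            self.array[address] = data             # commit right away
```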
An object of the present invention is to provide a semiconductor integrated circuit having a configuration in which data buffers are employed for avoiding data competition caused by the parallel operation of plural memory blocks thereby improving the throughput of read operations.
Another object of the present invention is to provide a semiconductor integrated circuit which can improve the throughput of read operations without entailing excessive expansion in the logic scale of its logic circuitry.
Still another object of the present invention is to provide a semiconductor integrated circuit which can readily accept write access requests regardless of the internal memory operation state.
The above and further objects and novel features of the present invention will be more clearly understood by reading the detailed description of the present invention in conjunction with the attached figures.
The following briefly sets forth a summary of representative embodiments of the present invention among those covered herein.
[1] In order to avoid data competition caused by the parallel operation of plural memory blocks, read buffers are employed to improve the throughput of read operations. To this end, a semiconductor integrated circuit has a configuration comprising: a plurality of memory blocks (BNK0-BNK7) capable of parallel operation; an external interface means (I/F1) capable of externally inputting write data and externally outputting read data; read buffers (RB0-RB3), each capable of retaining read data read out from a memory block in response to an external output-incapable state in which the read data cannot be externally outputted from the external interface means; and selecting means for selecting either read data read out from a memory block or read data read out from a read buffer and for feeding it to the external interface means while the external output-incapable state is not present.
According to the above configuration, if a read operation is performed in one of the memory blocks capable of parallel operation while read data from another memory block is being externally outputted from the external interface means, the newly read data would cause resource competition at the point of its external output; it is therefore temporarily stored in a read buffer, and is then externally outputted from the read buffer after the preceding data output action terminates. Therefore, even if a read access request that would cause resource competition arrives during a read data output operation, the corresponding read operation may be started without making the later request wait, and its read data may be externally outputted as soon as the risk of resource competition is resolved; thus, the throughput of the read data output operations may be improved.
If there is no resource competition when data is read out from a memory block, the read data is externally outputted directly from the external interface means without the intervention of a read buffer, so that useless temporary buffering of the data is avoided when there is no data competition; on this point as well, the present invention contributes to improving the throughput of the read data output operations.
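As an illustration only (the patent text itself contains no code), the following Python sketch models the buffered read path described above under simplifying assumptions: eight banks share a single external output port, a read that completes while the port is busy is parked in one of four read buffers, and buffered data is drained to the port as soon as the preceding output finishes; a read that completes while the port is idle bypasses the buffers entirely. The class and method names are hypothetical stand-ins for the BNK0-BNK7, RB0-RB3, and I/F1 elements.

```python
from collections import deque

class ReadPathModel:
    """Behavioral sketch of the buffered read path (hypothetical names).

    Eight memory banks share one external output port.  When a bank
    finishes a read while the port is busy, the data is parked in one
    of four read buffers instead of stalling the bank; when the port
    is idle, bank data bypasses the buffers and goes out directly.
    """

    def __init__(self, num_banks=8, num_read_buffers=4):
        self.num_banks = num_banks
        self.read_buffers = deque(maxlen=num_read_buffers)  # RB0..RB3 analogue
        self.port_busy = False        # external interface (I/F1 analogue)
        self.output_log = []          # data actually driven off-chip, in order

    def bank_read_complete(self, bank, data):
        """A bank has read data ready; route it to the port or a buffer."""
        if not self.port_busy:
            # No resource competition: output directly, no buffering overhead.
            self._drive_output(bank, data)
        elif len(self.read_buffers) < self.read_buffers.maxlen:
            # Port is occupied by an earlier read: park the data.
            self.read_buffers.append((bank, data))
        else:
            # All buffers full: in a real design the access would be
            # back-pressured; here we just record the condition.
            raise RuntimeError("read buffers exhausted; bank must wait")

    def output_done(self):
        """The current external output finished; drain a buffer if any."""
        self.port_busy = False
        if self.read_buffers:
            bank, data = self.read_buffers.popleft()
            self._drive_output(bank, data)

    def _drive_output(self, bank, data):
        self.port_busy = True
        self.output_log.append((bank, data))


# Example: bank 0's data occupies the port; bank 1 finishes meanwhile
# and is parked in a read buffer, then drained when the port frees up.
model = ReadPathModel()
model.bank_read_complete(0, "A")   # port idle -> direct output
model.bank_read_complete(1, "B")   # port busy -> buffered
model.output_done()                # port frees -> "B" drained from buffer
assert model.output_log == [(0, "A"), (1, "B")]
```

The bypass branch in the sketch corresponds to the direct output path discussed above, so in this simplified model no buffering cost is incurred when there is no competition.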
A read buffer may be constituted by a memory having a smaller capacity and a higher speed than that of the memory blocks. For example, when the memory blocks are formed by DRAM modules, then the read buffers may be constituted by SRAM modules.
When the above configuration is viewed in terms of control, the semiconductor integrated circuit comprises a plurality of memory blocks (BNK0-BNK7) capable of parallel operation, read buffers (RB0-RB3) capable of holding read data read out from the aforementioned memory blocks, an external interface means (I/F1) capable ...
Kobayashi Toru
Kume Masaji
Miyaoka Shuichi
Nakayama Michiaki
Sakakibara Hideki