Method and apparatus for high-speed read operation in...

Static information storage and retrieval – Addressing – Plural blocks or banks

US Classes: C365S189040, C365S220000
Type: Reexamination Certificate
Status: active
Patent number: 06628562

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to increasing the operating speed of a semiconductor memory.
2. Description of the Related Art
FIG. 1 shows a memory core of a semiconductor memory such as a DRAM. A memory core (also referred to as a block or a bank) has a plurality of memory cells MC which are arranged in a matrix. These memory cells MC are respectively connected to word lines WL0, WL1, WL2, . . . which are laid in the horizontal direction of the diagram and bit line pairs BL0-/BL0, BL1-/BL1, . . . which are laid in the vertical direction of the diagram. The bit line pairs BL0-/BL0, BL1-/BL1, . . . are connected to respective sense amplifiers SA.
In a read operation of a semiconductor memory of this type, a word line is selected to turn on the transfer transistors of the memory cells MC so that the data of the memory cells MC is read to the bit line pairs. The read data is amplified by the sense amplifiers SA and output to the exterior. Then, the bit line pairs are precharged (equalized) to complete the read operation.
For example, the pieces of data read from the memory cells MC that are shown with thick frames in the diagram are transmitted through the bit line pair BL1, /BL1 to a sense amplifier SA. That is, the bit line pair BL1, /BL1 is shared among these memory cells MC. In this example, the memory cells MC that are connected to the bit line pair BL1, /BL1 retain “0 data,” “1 data,” “0 data,” and “0 data,” starting from the top in the diagram.
FIG. 2 shows read operations of the DRAM described above. When the word line WL0 shown in FIG. 1 is selected, data is read to bit line BL1 from the memory cell MC connected to the word line WL0. This lowers the voltage of the bit line BL1 (FIG. 2(a)). Then, the sense amplifier SA operates to amplify the voltage difference in the bit line pair BL1, /BL1 (FIG. 2(b)). After the read of “0 data,” the bit line pair BL1, /BL1 is precharged to complete the read cycle (FIG. 2(c)).
If the word line WL1 is selected during the selection of the word line WL0, data is read to the bit line BL1 from the memory cell MC that retains “1 data” (FIG. 2(d)). Here, since the voltage of the bit line BL1 has already been amplified to low level, a data crash occurs in the memory cell MC that retains “1 data.” If data is read from the memory cells MC that are connected to the bit line /BL1 and retain “0 data,” a data crash also occurs in these memory cells MC (FIG. 2(e)).
As described above, the simultaneous activation of a plurality of word lines within a memory core causes a data crash. On this account, it has been impossible to perform read operations on a plurality of memory cells MC that are connected to an identical bit line, at intervals shorter than a cycle time. In other words, the interval of requests for read operations on a single memory core has had to be greater than or equal to the read cycle (cycle time).
The foregoing problem is an obstacle to the high-speed operation of semiconductor memories, hindering improvements in data read rate. DRAMs in particular have longer cycle times than SRAMs and the like, because they require a precharging time and are often provided with longer bit lines to reduce memory core area. The foregoing problem is therefore especially serious for DRAMs.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a semiconductor memory capable of operating at high speed for improvement in data read rate.
According to one of the aspects of the present invention, data is stored in a plurality of first memory blocks, and regeneration data for regenerating the data stored in the first memory blocks is stored in a second memory block. In a read operation, either a first operation or a second operation is performed to read the data. In the first operation, data is read directly from a selected first memory block among the plurality of first memory blocks. In the second operation, the selected first memory block does not operate, and data is regenerated from the data stored in the unselected first memory blocks and the regeneration data stored in the second memory block.
Thus, performing a second operation in parallel with a first operation allows the data of a first memory block to be read even while that first memory block is busy with another read. Accordingly, requests for read operations made from the exterior of the memory can be received at an interval shorter than the read cycle necessary for a first memory block to perform a single read operation. As a result, the semiconductor memory can operate at high speed, with an improvement in data read rate.
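To make the two read paths concrete, the following is a minimal sketch in Python, assuming the regeneration data is an XOR parity of the first memory blocks (consistent with the parity-bit aspect described below); the block sizes, function names, and data layout are illustrative only and are not taken from the patent.

```python
# Illustrative sketch of the two read operations, assuming the
# regeneration data is an XOR parity of all first memory blocks.
# Names and data layout are hypothetical, not from the patent.

def read_first_operation(first_blocks, selected, address):
    """First operation: read directly from the selected first memory block."""
    return first_blocks[selected][address]

def read_second_operation(first_blocks, parity_block, selected, address):
    """Second operation: the selected block stays idle; its data is
    regenerated by XOR-ing the unselected first blocks with the parity."""
    value = parity_block[address]
    for index, block in enumerate(first_blocks):
        if index != selected:
            value ^= block[address]
    return value

# Three first memory blocks holding one bit each, plus one parity block.
first_blocks = [[0], [1], [0]]
parity_block = [first_blocks[0][0] ^ first_blocks[1][0] ^ first_blocks[2][0]]

# Both operations return the same data for block 1, address 0.
assert read_first_operation(first_blocks, 1, 0) == \
       read_second_operation(first_blocks, parity_block, 1, 0) == 1
```

Because the second operation never touches the selected block, a read request for that block can be served even while the block itself is still in the middle of a previous read cycle.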
In a write operation, for example, data is written to a selected first memory block among the plurality of first memory blocks. At the same time, regeneration data for regenerating the data stored in the first memory block is written to the second memory block.
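Continuing the same hypothetical XOR-parity sketch, a write could update the second memory block from the old and new data of the selected first memory block (the exact parity-update rule is an assumption for illustration, not quoted from the patent):

```python
def write(first_blocks, parity_block, selected, address, new_value):
    """Write data to the selected first memory block and refresh the
    regeneration data in the parity (second) block at the same address."""
    old_value = first_blocks[selected][address]
    first_blocks[selected][address] = new_value
    # Cancel the old contribution and fold in the new one, so the parity
    # still equals the XOR of all first memory blocks at this address.
    parity_block[address] ^= old_value ^ new_value
```

Updating the parity in place this way avoids re-reading every other first memory block on each write.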
According to another aspect of the present invention, a parity bit of the first memory blocks is stored in the second memory block as the regeneration data. Since the regeneration data for regenerating each memory cell of the first memory blocks can be configured as a single bit, the storage capacity of the second memory block can be minimized. Therefore, the second memory block can be shrunk in layout size, with a reduction in the chip size of the semiconductor memory.
According to another aspect of the present invention, the semiconductor memory includes a plurality of memory block groups, each composed of a predetermined number (other than one) of first memory blocks among the plurality of first memory blocks and one of a plurality of second memory blocks. Each of the first memory blocks belongs to a plurality of memory block groups. First memory blocks that belong to the same memory block group do not belong to any other memory block group together.
The memory block groups can be configured easily by, for example, arranging the first memory blocks in a matrix and assigning second memory blocks to the pluralities of first memory blocks that align in the horizontal direction and the vertical direction, respectively (memory block groups of two-dimensional configuration). Here, the memory block groups are identified by an address signal, and the address bits of the first memory blocks belonging to one memory block group partly have the same value. Since the first and second memory blocks can be arranged by simple rules, the layout design is facilitated. This prevents the wiring that interconnects the first and second memory blocks from becoming complicated, and reduces the layout size necessary for the wiring. As a result, the semiconductor memory can be made smaller in chip size. Besides, the reduction in wiring length makes it possible to operate the first and second memory blocks at higher speed.
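As a rough illustration of the two-dimensional grouping (the 4x4 block count and the address split are assumptions made for the example, not figures from the patent), each first memory block can be mapped to a row group and a column group, each served by its own second memory block:

```python
ROWS, COLS = 4, 4  # assumed 4x4 matrix of first memory blocks

def block_groups(block_address):
    """Map a first-memory-block address to its two memory block groups.
    The upper address bits give the row group and the lower bits give the
    column group, so blocks in one group share part of their address bits."""
    row, col = divmod(block_address, COLS)
    return ("row", row), ("col", col)

# Blocks 5 (row 1, col 1) and 6 (row 1, col 2) share only the row group,
# so no two first memory blocks belong to more than one group together.
print(block_groups(5))  # (('row', 1), ('col', 1))
print(block_groups(6))  # (('row', 1), ('col', 2))
```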
Since at least the first operations or the second operations are performed in parallel on a plurality of memory block groups, requests for read operations from the exterior of the memory can be received at an interval shorter than the read cycle of the first memory blocks; that is, the interval at which the requests are received (the cycle time) can be reduced.
According to another aspect of the present invention, the semiconductor memory includes a plurality of flag circuits and a block selecting circuit. The flag circuits indicate the operating states of the first and second memory blocks, respectively. Since the block selecting circuit has only to select at least the first or second memory blocks in accordance with the outputs of the flag circuits and an address signal, it can be made small in circuit scale. The flag circuits, for example, change the operating states to “operative” in response to the output of the memory block selecting signals corresponding to the flag circuits, these signals being output from the block selecting circuit. They change the operating states to “inoperative” a predetermined time
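A software-level sketch of that selection logic might look like the following; the flag representation and the fallback rule are assumptions made for illustration, since the patent describes hardware flag circuits and a block selecting circuit rather than code:

```python
def select_operation(busy_flags, selected_block):
    """Choose the first operation when the addressed first memory block is
    idle; otherwise fall back to the second (regenerating) operation.
    busy_flags plays the role of the flag circuits (True = operative)."""
    if not busy_flags[selected_block]:
        busy_flags[selected_block] = True   # flag set to "operative"
        return "first_operation"
    return "second_operation"               # selected block stays idle

# Two back-to-back requests for block 2: the first goes directly to the
# block, the second is served by regeneration while the block is busy.
flags = {0: False, 1: False, 2: False}
print(select_operation(flags, 2))  # first_operation
print(select_operation(flags, 2))  # second_operation
```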
