Dynamic type memory

Static information storage and retrieval – Addressing – Plural blocks or banks

Reissue Patent


Details

C365S230020, C365S230080, C365S189050, C365S189020

active

RE037427

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a semiconductor memory device and, more specifically, to a dynamic type memory or a dynamic RAM (DRAM) capable of transferring data at high speed through an input/output path.
2. Description of the Related Art
In a dynamic type memory, a divided cell array operating system is employed wherein a memory cell array is divided into a plurality of cell arrays (sub arrays) and only some of the cell arrays are operated at the same time. This system makes it possible to reduce the charge/discharge current of the bit lines, which accounts for a large part of the current consumed in a row operation. The number of sub arrays has a close relation to the operation speed of the memory. If each sub array is large in size, the capacitance of the word lines becomes too large and the rise and fall speeds of the word lines decrease. Since the capacitance of the bit lines also becomes too large, the potential difference between a pair of bit lines is lessened, and the speed at which the potential difference is amplified by a sense amplifier becomes slow, with the result that the operation speed of the entire memory is lowered. For this reason, as the memory is miniaturized and its capacity is increased, the number of sub arrays is likely to increase in order to reduce the charge/discharge current of the bit lines and thereby prevent the operation speed of the entire memory from lowering.
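As an aside not taken from the patent, this trade-off can be illustrated with a short Python sketch: dividing the array into more sub arrays shortens each bit line, so fewer cells load it and less charge is moved per row operation. All array sizes, capacitance values, and the voltage swing below are hypothetical.

```python
# Hypothetical illustration of the divided cell array trade-off: more sub arrays
# mean shorter bit lines (fewer cells attached), so less charge is moved when
# one sub array is activated. Numbers are assumptions, not taken from the patent.

TOTAL_ROWS = 4096          # rows in the whole cell array (assumed)
COLUMNS = 4096             # bit-line pairs activated in one sub array (assumed)
CAP_PER_CELL_FF = 0.2      # bit-line capacitance per attached cell, in fF (assumed)
SWING_V = 1.65             # voltage swing on each bit line during sensing (assumed)

def charge_per_row_operation_pc(num_sub_arrays: int) -> float:
    """Charge in picocoulombs moved on the bit lines of one activated sub array."""
    rows_per_sub_array = TOTAL_ROWS // num_sub_arrays
    bitline_cap_ff = rows_per_sub_array * CAP_PER_CELL_FF    # shorter bit line -> less capacitance
    return COLUMNS * bitline_cap_ff * SWING_V / 1000.0       # fF * V = fC; /1000 -> pC

for n in (1, 4, 16, 64):
    print(f"{n:3d} sub arrays: {charge_per_row_operation_pc(n):8.0f} pC per row operation")
```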
The semiconductor chip of a conventional versatile DRAM is applicable to a variety of bit configurations such as 1-bit, 4-bit, 8-bit, and 16-bit configurations and various types of packaging such as DIP, SOJ, TSOP and ZIP. For this reason, as shown in FIG. 4, a DQ buffer 43 for amplifying data of a data line 42 is provided in the vicinity of each of sub arrays 41 on the semiconductor chip, data of all the DQ buffers 43 are concentrated in a single multiplexer 44 arranged on the chip (in the center of the chip in FIG. 4), and data having a bit configuration is supplied from the multiplexer to an I/O pad 45 of its corresponding packaging.
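For concreteness, the conventional data path just described can be modeled structurally. The sketch below is only an illustrative software analogy; the class names, array count, and pass-through behavior are assumptions, not a description of the actual circuit.

```python
# Illustrative software analogy (assumed names and sizes) of the conventional
# data path: one DQ buffer per sub array, all concentrated in a single central
# multiplexer that feeds the I/O pads of the chosen bit configuration.

class SubArray:
    def __init__(self, index: int):
        self.index = index
    def read(self, column: int) -> int:
        # Placeholder for sensing a memory cell on a bit-line pair.
        return (self.index ^ column) & 1

class DQBuffer:
    def __init__(self, sub_array: SubArray):
        self.sub_array = sub_array
    def amplify(self, column: int) -> int:
        # The DQ buffer re-drives the data-line signal (modeled as a pass-through).
        return self.sub_array.read(column)

class CentralMultiplexer:
    def __init__(self, buffers):
        self.buffers = buffers
    def to_io_pads(self, column: int, width: int):
        # Concentrates the outputs of all DQ buffers and selects as many bits
        # as the packaging's bit configuration (1, 4, 8 or 16) requires.
        return [buf.amplify(column) for buf in self.buffers[:width]]

mux = CentralMultiplexer([DQBuffer(SubArray(i)) for i in range(16)])
print(mux.to_io_pads(column=3, width=8))   # hypothetical 8-bit configuration
```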
According to the above conventional technique, in which all the data read out from the sub arrays, whose number tends to increase, are concentrated at one point on the chip, the data paths formed in the chip become long, which prevents data from being transferred at high speed.
In some specialized DRAM chips, the I/O pads are concentrated on one side of the chip, or a vertical surface mounting package (VSMP), which can be mounted vertically on a memory printed circuit board, is used; the lead frame in the package and the wires on the circuit board are thereby shortened to increase the data transfer speed, and the data transfer rate is improved at the same time by adopting a multi-bit configuration such as an 8-bit or 16-bit configuration.
A dynamic RAM (DRAM) can be produced at low cost and is therefore employed in bulk as memory in computer systems. In the field of computers, the operation speed of the microprocessor (MPU) has improved remarkably and has become higher and higher than that of the DRAM. Improving the speed of data transfer between the MPU and the DRAM is therefore an important factor in increasing the processing speed of the computer system as a whole. Various improvements have been made to increase the data transfer speed, and a typical one is the adoption of a high-speed memory, or cache memory. The cache memory, which is interposed between the MPU and the main memory to bridge the gap between the cycle time of the MPU and the access time of the main memory, improves the efficiency with which the MPU is used.
Examples of the cache memory include a static RAM (SRAM) on a chip separate from both the MPU chip and the DRAM chip; an SRAM, called an on-chip cache memory or an embedded memory, mounted on an MPU chip (an MPU chip carrying such a cache memory may also be combined with an SRAM cache memory on another chip); and SRAM cells mounted on a DRAM chip.
The technique of mounting a cache memory including SRAM cells on a DRAM chip is disclosed in "A Circuit Design of Intelligent CDDRAM with Automatic Write Back Capability," 1990 Symposium on VLSI Circuits, Digest of Technical Papers, pp. 79-80. According to this technique, an SRAM cell is added to each column of a DRAM using cells each having one transistor and one capacitor, and this SRAM cell is employed as a cache memory. Moreover, when the data of an address to be read out is not stored in the cache memory (mishit), the data of the cache memory is written back to the DRAM cells of its original address, and then the data stored in the DRAM cells of the address to be accessed is read out into the cache memory. This cache-memory-mounted DRAM can be employed together with a cache-memory-mounted MPU. The technique of using sense amplifiers of bit lines of a DRAM as cache memories is disclosed in Japanese Patent Application No. 3-41316 (Jpn. Pat. Appln. KOKAI Publication No. 4-212780), whose applicant is the same as that of the present application. A specific constitution of the cache memories and a specific control operation thereof are disclosed in Japanese Patent Application No. 3-41315, whose applicant is also the same as that of the present application.
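A hedged behavioral sketch of this automatic write-back scheme follows. The class name, the array sizes, and the single row-wide cache are illustrative assumptions, not details taken from the cited paper or from the patent.

```python
# Behavioral sketch (assumed structure): a row-wide SRAM cache in front of a
# DRAM array. On a mishit, the cache contents are automatically written back to
# the DRAM row they came from before the newly addressed row is fetched.

class WriteBackCacheDRAM:
    def __init__(self, rows: int, columns: int):
        self.dram = [[0] * columns for _ in range(rows)]
        self.cache = [0] * columns     # one SRAM cell per column
        self.cached_row = None         # row address whose data the cache holds
        self.dirty = False

    def read(self, row: int, col: int) -> int:
        if self.cached_row != row:                        # mishit
            if self.cached_row is not None and self.dirty:
                # Automatic write back: restore the cache contents to the DRAM
                # row they came from before loading the newly addressed row.
                self.dram[self.cached_row] = list(self.cache)
            self.cache = list(self.dram[row])             # fetch the new row
            self.cached_row = row
            self.dirty = False
        return self.cache[col]                            # hit path: column access only

    def write(self, row: int, col: int, value: int) -> None:
        self.read(row, col)                               # ensure the row is cached
        self.cache[col] = value
        self.dirty = True

mem = WriteBackCacheDRAM(rows=4, columns=8)
mem.write(1, 5, 1)
print(mem.read(1, 5))   # hit: served from the SRAM cache
print(mem.read(2, 5))   # mishit: row 1 is written back, row 2 is fetched
print(mem.read(1, 5))   # the earlier write survived thanks to the write back
```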
Furthermore, Japanese Patent Application No. 4-131095, the applicant of which is the same as that of the present application, proposes a DRAM wherein a memory region is divided into a plurality of sub arrays, the sub arrays are operated independently of one another, and sense amplifiers of bit lines are employed as cache memories, thereby enhancing the hit rate of the cache memories.
Since, in this DRAM, the sense amplifiers of each sub array hold data read out from a row corresponding to a different address, the probability that a requested access hits a selected row can be increased, and the average data access time, which depends on both the hit probability and the mishit probability, can be reduced.
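As an illustration only, not a description of the proposed DRAM, the following Python sketch assumes a simple access pattern with a small working set of recently used rows and shows why keeping one latched row per sub array (bank) raises the hit probability compared with a single latched row. All parameters are hypothetical.

```python
# Hypothetical simulation: each bank's sense amplifiers hold the row most
# recently read in that bank. Accesses revisit a small working set of recently
# used rows with some probability ("locality"). With more banks, more of the
# working set stays latched at once, so the hit rate rises.

import random

def hit_rate(num_banks, rows_per_bank, accesses=20_000, locality=0.7, working_set_size=8):
    latched = [None] * num_banks      # row currently held by each bank's sense amplifiers
    working_set = []                  # recently used (bank, row) addresses
    hits = 0
    rng = random.Random(0)
    for _ in range(accesses):
        if working_set and rng.random() < locality:
            bank, row = rng.choice(working_set)          # revisit a recent row
        else:
            bank, row = rng.randrange(num_banks), rng.randrange(rows_per_bank)
        if latched[bank] == row:
            hits += 1                                    # hit: column operation only
        else:
            latched[bank] = row                          # mishit: new row displaces the old one
        if (bank, row) in working_set:
            working_set.remove((bank, row))
        working_set = (working_set + [(bank, row)])[-working_set_size:]
    return hits / accesses

for banks in (1, 4, 16):
    print(f"{banks:2d} bank(s): hit rate ~{hit_rate(banks, 4096 // banks):.2f}")
```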
A cache memory system using sense amplifiers will now be described briefly. Assume that the DRAM stands by for access from the MPU and that, while standing by, data read out from the memory cells of a certain row address is latched in the sense amplifiers.
If there is an access to a row address whose memory cell data is latched in the sense amplifiers (a hit), the data can be output by a column operation alone, without a row operation, and the access time required for the row operation is saved accordingly.
In contrast, if there is an access to a row address whose memory cell data is not latched in the sense amplifiers (a mishit), the data held in the sense amplifiers must first be written back to the memory cells (or the sense amplifiers must be equalized), and then the data of the new row address must be latched in the sense amplifiers. In this mishit case, the access time is much longer than when no cache memory system is employed.
If the hit rate of the cache memories is low, the average access time of the system is lengthened. Increasing the hit rate is therefore important for shortening the average access time of the system.
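The dependence of the average access time on the hit rate can be made concrete with a worked example; the latencies below are assumptions chosen only for illustration.

```python
# Worked example with assumed latencies of how the hit rate dominates the
# average access time of a sense-amplifier cache system:
#   t_avg = h * t_hit + (1 - h) * t_miss
# where t_miss covers the write back (or equalize), the new row activation,
# and the column access.

T_HIT_NS = 20    # column operation only (assumed)
T_MISS_NS = 80   # write back + row activation + column operation (assumed)

for h in (0.5, 0.8, 0.95):
    t_avg = h * T_HIT_NS + (1 - h) * T_MISS_NS
    print(f"hit rate {h:.2f}: average access time {t_avg:.0f} ns")
```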
In order to enhance the foregoing hit rate, there are two approaches: a first method of increasing the capacity of each of the cache memories, and a second method of dividing the cache memories into a plurality of banks.
If the first method is applied to the cache memory system using sense amplifiers, the number of sense amplifiers that stand by for access while latching data is increased. Generally, as described above, a large-capacity memory performs partial activation, in which only some of the sub arrays are activated at the same time; in this case, no data is usually held in the sense amplifiers of the sub arrays in which no row operation is performed. If, however, these sense amplifiers are also caused to latch data, the number of sense amplifiers standing by for access while latching data can be increased, and so can the capacity of the cache memories, thereby enhancing the hit rate.
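A small arithmetic sketch of this first method, with assumed sub array counts and column widths, shows how much cache capacity is gained when the sense amplifiers of the non-activated sub arrays also keep their data latched.

```python
# Assumed numbers only: compare the cache capacity available while standing by
# when (a) only the partially activated sub arrays hold latched data, versus
# (b) every sub array's sense amplifiers keep their last row latched.

SUB_ARRAYS = 16
ACTIVATED_AT_ONCE = 2          # partial activation of a large-capacity memory (assumed)
COLUMNS_PER_SUB_ARRAY = 512    # bits held by one sub array's sense-amplifier row (assumed)

conventional_bits = ACTIVATED_AT_ONCE * COLUMNS_PER_SUB_ARRAY
first_method_bits = SUB_ARRAYS * COLUMNS_PER_SUB_ARRAY
print(f"cache bits latched, conventional partial activation: {conventional_bits}")
print(f"cache bits latched, all sense amplifiers latching:    {first_method_bits}")
```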
If the above second method is applied to the cache memory system using sense amplifiers, these sense amplifiers are divided into a plurality of banks
