Static information storage and retrieval – Read/write circuit – Having particular data buffer or latch

Patent Number: 6,538,928
Type: Reexamination Certificate
Status: Active
Filed: 2000-10-11
Issued: 2003-03-25
Examiner: Lebentritt, Michael S. (Department: 2824)
U.S. Classification: 365/189.08; 365/205; 365/207
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to improvements in memory architectures and methods of operating them, and more particularly to improvements in memory architectures and their operation for improved data transfers between the memory array and a cache memory associated therewith. Still more particularly, it relates to improvements in systems and methods for sharing sense amplifiers in a memory architecture, enabling the width of a global data bus (e.g., one implemented using a global metallization layer) to be reduced by multiplexing data from the shared sense amplifiers to the centralized cache over successive clock cycles.
2. Relevant Background
Today, in memory architectures in general, and DRAM architectures in particular, one physical circuit layout that has been suggested includes sets of sense amplifiers alternating with the memory array blocks they service. See, for example, U.S. Pat. No. 5,887,272, which is assigned to the assignee hereof and is incorporated herein by reference. The sense amplifiers are arranged in stripes between adjacent DRAM array blocks, and each sense amplifier stripe may be connected to selectively service the DRAM cells on both sides of the stripe. Thus, the sense amplifiers in a particular stripe may be selectively connected either to selected memory cells in the array located to the left of the stripe or to selected memory cells in the array located to the right of the stripe.
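For illustration only, the following minimal Python sketch models this alternating layout; the class and method names are hypothetical and not taken from the patent. It shows only the selection behavior described above: a stripe sitting between two array blocks can be switched to service either neighbor.

```python
# Hypothetical model of sense amplifier stripes interleaved with DRAM array
# blocks, where each stripe can be connected to the block on either side.

class SenseAmpStripe:
    """Stripe i sits between array block i (left) and block i + 1 (right)."""

    def __init__(self, index, num_blocks):
        self.index = index
        self.num_blocks = num_blocks
        self.connected_block = None  # block currently being serviced

    def connect(self, side):
        """Selectively connect this stripe to its left or right neighbor."""
        block = self.index if side == "left" else self.index + 1
        if not 0 <= block < self.num_blocks:
            raise ValueError("no array block on that side of the stripe")
        self.connected_block = block

# Layout: block0 | stripe0 | block1 | stripe1 | block2 | stripe2 | block3
stripes = [SenseAmpStripe(i, num_blocks=4) for i in range(3)]

stripes[1].connect("left")         # stripe1 senses cells in block1 ...
print(stripes[1].connected_block)  # -> 1
stripes[1].connect("right")        # ... or, alternatively, in block2
print(stripes[1].connected_block)  # -> 2
```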
One trend in the design of memory devices is toward ever-faster access speeds. In this regard, it has been proposed to include cache memory elements into which the contents of the DRAM array are temporarily written before being delivered to the output of the memory. The cache memory serves to hide the overhead associated with the DRAM array by allowing the data access to occur while the precharge and next activation of the array are underway; this effectively speeds up the overall data rate by eliminating otherwise dead periods. As used herein, the terms “cache” and “cache memory” refer to a data latch or other suitable circuit that can temporarily hold data read from the DRAM array before it is delivered to the output of the memory. Thus, in such designs, when an element of the memory array is read, it is detected by a sense amplifier associated with the memory cell being read and then delivered from that sense amplifier to the cache memory element at least temporarily associated with it.
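To make the overlap concrete, here is a rough Python timing sketch. All cycle counts are invented assumptions for illustration, not figures from the patent; the point is only that, with a cache, the data access for one row overlaps the precharge and next activation of the array, so only the first access pays the full array latency.

```python
ACTIVATE, READ, PRECHARGE = 3, 4, 3  # assumed cycle counts per phase

def total_cycles(rows, cached):
    if not cached:
        # No cache: every row access is fully serialized.
        return rows * (ACTIVATE + READ + PRECHARGE)
    # With a cache, the READ (data access) of one row overlaps the
    # PRECHARGE + ACTIVATE of the next, eliminating the dead periods.
    steady_state = max(READ, PRECHARGE + ACTIVATE)
    return ACTIVATE + READ + (rows - 1) * steady_state

print(total_cycles(8, cached=False))  # 80 cycles, fully serialized
print(total_cycles(8, cached=True))   # 49 cycles, overhead largely hidden
```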
Additionally, memory arrays are becoming more and more dense. DRAM designers, for example, are under constant tension to lay out DRAM circuits ever more densely while at the same time including larger amounts of functionality in the circuit. One technique that integrated circuit manufacturers have used to address these problems is to place greater emphasis on multi-layered structures. For example, above the active regions of the device, one or more layers of interconnecting metal or other conducting material, such as polysilicon, may be used. However, as the number of layers increases, the surface on which subsequent layers are formed becomes increasingly uneven. As a result, the overlying or subsequently formed structures tend to be susceptible to discontinuities, due to step-like features that form at the surface, and the pitch of the interconnect structures generally cannot be made arbitrarily small. (The pitch of an interconnect is the distance between an interconnect structure and its closest neighbor, plus the width of the interconnect itself.)
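As a toy numerical illustration of that definition (the dimensions below are invented for the example and do not come from the patent):

```python
# pitch = spacing to the closest neighboring line + the line's own width
line_width_um = 0.25   # width of one interconnect line (assumed)
line_space_um = 0.25   # spacing to its closest neighbor (assumed)
pitch_um = line_space_um + line_width_um
print(pitch_um)        # 0.5 um of routing track consumed per line
```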
One problem that has been encountered lies in the interconnect structure between the cache array elements and their respective sense amplifiers. Since the interconnect must traverse at least a portion of the surface overlying the memory array, in modern DRAMs only a few cache elements are typically provided, paired with a respective few sense amplifiers, so that only a portion of a row of the DRAM array, for example, is read out at a time. This requires a number of memory access cycles to be performed in order to read out all of the desired memory array locations. An alternative structure that has been proposed is to provide the cache memory elements in close proximity to the sense amplifiers.
Typically, however, the cache memory elements are arranged physically in a stripe that is associated both with the DRAM array and with the sense amplifier stripes in which the circuit is laid out. One problem with the practical implementation of a centralized cache shared among multiple sense amplifier “bands” or stripes has been the inability to implement, on the DRAM sense amplifier pitch, the global busses required to connect the sense amplifiers to the cache bits. For example, in a memory that has an 8K-bit page size, a global bus having 16K lines must be provided: one line per bit and one line per bit complement. This is prohibitively large. That is, for the sense amplifiers to be packed closely enough for global lines to be routed from the sense amplifiers to the centralized cache, the sense amplifiers must, practically speaking, be shared among the global bus lines. This constraint results in the cache having fewer bits than the number of bits held in an active set of sense amplifiers.
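The following back-of-the-envelope Python sketch shows why the one-line-per-bit bus is prohibitive. The page size and the two-lines-per-bit count come from the paragraph above; the routing pitch is an invented value, carried over from the toy pitch example earlier.

```python
page_bits = 8 * 1024        # 8K-bit page
bus_lines = 2 * page_bits   # one true line plus one complement line per bit
pitch_um  = 0.5             # assumed routing pitch per line (illustrative)

bus_width_mm = bus_lines * pitch_um / 1000.0
print(bus_lines)            # 16384 lines
print(bus_width_mm)         # 8.192 mm of routing width for the bus alone
```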
Furthermore, each time the DRAM array is accessed it must be precharged, and precharging the DRAM array erases any information previously held in the sense amplifiers. Thus, reading out a single row of DRAM memory may entail several cycles of precharging and reading, due to the limited number of bus traces or lines that can practically be used to interconnect the sense amplifiers and the cache elements.
What is needed, therefore, is a memory architecture and method of operation that enable the memory to be operated with a plurality of shared sense amplifiers and a centralized cache, in which a global bus connects the sense amplifiers to the centralized cache, the number of bits simultaneously transferred is less than the number of sense amplifiers, and the memory array needs to be precharged only once for the entire data transfer.
SUMMARY OF THE INVENTION
In light of the above, therefore, it is an object of the invention to provide a DRAM integrated circuit in which the number of bus lines that interconnect the sense amplifiers and their associated cache elements can be reduced.
It is another object of the invention to provide a DRAM array in which a row, or other portion, of the DRAM can be read and transferred to a cache memory with only a single DRAM array precharge being required.
It is another object of the invention to provide a method of operating a DRAM to transfer data from the memory cells of the DRAM array to a cache memory without requiring any DRAM array precharge after the first.
It is another object of the invention to provide a memory architecture that enables a smaller amount of chip space to be used, while enabling rapid memory read accesses.
These and other objects, features and advantages of the invention will be apparent to those skilled in the art from the following detailed description of the invention, when read in conjunction with the accompanying drawings and appended claims.
In accordance with a broad aspect of the invention, a memory architecture is presented that uses shared sense amplifiers and a centralized cache containing M bits. A global bus of n lines connects the sense amplifiers and the centralized cache. In operation, n < M bits are transferred per cycle, so the full M bits reach the centralized cache in M/n cycles. The ratio n:M may be, for example, 1:2, 1:3, or another convenient ratio.
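A minimal behavioral sketch of this multi-cycle transfer in Python follows. The function and variable names are hypothetical rather than the patent's circuit; the sketch captures only the bus arithmetic, namely n bits per cycle over M/n cycles, with the sense amplifiers holding their data for the duration so no further array precharge is needed.

```python
def transfer_to_cache(sense_amp_bits, n):
    """Move all M sense-amplifier bits to the cache, n bits per cycle."""
    M = len(sense_amp_bits)
    assert M % n == 0, "assume the bus width divides the cache size evenly"
    cache = []
    for cycle in range(M // n):  # M/n bus cycles move the whole page
        cache.extend(sense_amp_bits[cycle * n:(cycle + 1) * n])
    return cache

page = [1, 0, 1, 1, 0, 0, 1, 0]              # M = 8 bits held in sense amps
assert transfer_to_cache(page, n=4) == page  # n:M = 1:2 -> 2 cycles
assert transfer_to_cache(page, n=2) == page  # n:M = 1:4 -> 4 cycles
```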
According to another broad aspect of the invention, a memory is presented that includes a memory array of cells having stored memory contents. A first sense amplifier array receives the memory contents of a selectively associated first plurality of cells in the memory array. A second sense amplifier array receives the memory contents of a selectively associated second plurality of cells in the memory array.
Assignee: Enhanced Memory Systems Inc.
Law Firm: Hogan & Hartson L.L.P.
Examiners: Lebentritt, Michael S.; Nguyen, Hien