Method and apparatus for accelerating signal equalization...

Static information storage and retrieval – Read/write circuit – Complementing/balancing

Reexamination Certificate


Details

Classification: C365S203000

Type: Reexamination Certificate

Status: active

Patent number: 06785176

ABSTRACT:

The present invention relates generally to a system and method for improving bit line equalization in a semiconductor memory.
BACKGROUND OF THE INVENTION
Traditionally, designers of mass-produced or commodity dynamic random access memory (DRAM) devices have focused more on achieving a low cost per bit through high aggregate bit density than on high memory performance. Typically, the low cost per bit has been achieved by designing DRAM architectures with sub-arrays as large as practically possible, despite the strongly negative effect of large sub-arrays on the time required to perform bit line pre-charge and equalization, as well as cell read-out, sensing, and writing of new values. These designs are favoured because the cell capacity of a two-dimensional memory array increases quadratically with scaling, while the overhead area of the support circuitry increases only linearly. The support circuitry includes bit line sense amplifiers, word line drivers, and X and Y address decoders. Thus, a relatively small increase in overhead area provides a relatively large increase in cell capacity.
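As a rough illustration of this scaling argument, the short sketch below shows the fraction of sub-array area spent on support circuitry shrinking as the sub-array dimension grows. All area figures are hypothetical placeholders, not values taken from the patent.

```python
# Illustrative sketch only: hypothetical area figures, not from the patent.
# The cell area of an n x n sub-array grows quadratically with n, while the
# sense amplifiers and word line drivers along its edges grow roughly
# linearly, so larger sub-arrays spend a smaller fraction of area on support.

SENSE_AMP_AREA = 20   # assumed area of one sense amplifier, in cell-area units
WL_DRIVER_AREA = 15   # assumed area of one word line driver, in cell-area units

for n in (128, 256, 512, 1024):                              # sub-array dimension in cells
    cell_area = n * n                                        # quadratic in n
    overhead_area = n * (SENSE_AMP_AREA + WL_DRIVER_AREA)    # linear in n
    fraction = overhead_area / (cell_area + overhead_area)
    print(f"{n:4d} x {n:<4d} sub-array: {fraction:6.1%} of area is support circuitry")
```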
The bit line equalization and pre-charge portion of a DRAM row access cycle represents operational overhead that increases the average latency of memory operations and reduces the rate at which row accesses can be performed. Part of the difficulty in reducing this latency is due to typical DRAM architectures, which maximize memory capacity per unit area by favouring large DRAM cell arrays. Large DRAM cell arrays require long bit lines, which are highly capacitive; the bit lines therefore require a relatively large amount of current to change their voltage quickly, as described in U.S. Pat. No. 5,623,446 issued to Hisada et al.
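To make the current requirement concrete, the following back-of-the-envelope sketch applies I = C * dV / dt to a long, highly capacitive bit line. All values are assumed for illustration and are not taken from the patent or from Hisada et al.

```python
# Back-of-the-envelope sketch with assumed values, not from the patent:
# the drive current needed to move a capacitive bit line by a given voltage
# within a given time follows from I = C * dV / dt.

C_BITLINE = 200e-15   # assumed capacitance of one long bit line, farads
DELTA_V   = 0.6       # assumed voltage swing back to the precharge level, volts
T_TARGET  = 2e-9      # assumed target precharge/equalization time, seconds
N_LINES   = 4096      # assumed number of bit lines precharged simultaneously

i_per_line  = C_BITLINE * DELTA_V / T_TARGET
i_aggregate = i_per_line * N_LINES   # peak aggregate current across the array
print(f"per bit line: {i_per_line * 1e6:.0f} uA   aggregate: {i_aggregate * 1e3:.0f} mA")
```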
Hisada et al. describe a system for providing a semiconductor memory with a booster circuit. The booster circuit boosts the voltage on the gates of the precharge and equalize devices during a selected interval in an attempt to shorten the precharge time. However, this approach requires higher power, which is undesirable for many applications.
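The effect exploited by gate boosting can be seen in a first-order MOSFET on-resistance model. The sketch below uses assumed device parameters, not values from Hisada et al., to show how a boosted gate voltage lowers the on-resistance of the precharge/equalize device and hence the RC precharge time constant.

```python
# First-order illustration with assumed device parameters, not from Hisada et al.:
# in the triode region R_on is roughly 1 / (k * (Vgs - Vt)), so boosting the gate
# voltage of the precharge/equalize device lowers R_on and the precharge time
# constant tau = R_on * C_bitline, at the cost of generating the boosted level.

K_DEVICE  = 2e-3     # assumed transconductance parameter k = u*Cox*W/L, A/V^2
V_THRESH  = 0.5      # assumed threshold voltage, volts
C_BITLINE = 200e-15  # assumed bit line capacitance, farads

for vgs in (1.2, 1.6, 2.0):                      # nominal versus boosted gate drive
    r_on = 1.0 / (K_DEVICE * (vgs - V_THRESH))   # triode-region approximation
    tau_ps = r_on * C_BITLINE * 1e12
    print(f"Vgs = {vgs:.1f} V   R_on = {r_on:6.0f} ohm   tau = {tau_ps:5.1f} ps")
```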
At the same time, the width of large DRAM arrays requires the simultaneous pre-charge and equalization of thousands of bit lines. The large number of simultaneously active bit lines limits the drive strength of the pre-charge and equalization devices for individual bit line pairs, in order to avoid the difficulties associated with large peak aggregate currents.
In contrast to commodity DRAM architectures, new DRAM architectures for embedded applications often focus on performance rather than density. This is achieved by increasing the degree of subdivision of the overall memory into a larger number of sub-arrays. Smaller active sub-arrays permit the use of higher-drive, faster pre-charge and equalization circuits than is possible in commodity memory devices. A memory with such an architecture is illustrated in U.S. Pat. No. 6,023,437 issued to Lee.
Lee describes a semiconductor device wherein the memory is segmented into components and adjacent memory components share a sense amplifier. The device includes a blocking circuit for blocking the bit lines associated with the memory component not in use, and it reduces the bit line precharge time by improving the operation of the blocking circuits. However, this approach runs into a fundamental limit on how much the bit line equalization period can be shortened, imposed by the distributed resistive and capacitive parasitic characteristics of the bit line material.
The latency impact of slow bit line equalization and pre-charge has traditionally been minimized by defining two different classes of memory operations. A first class comprises bank accesses, which require a full row and column access to reach a memory location. A second class comprises page accesses, which are typically faster than bank accesses and require only a column access to a row that has been left open from a previous bank operation. The efficacy of page accesses in reducing average latency is due to the statistical spatial locality in the memory access patterns of many computing and communication applications; that is, there is a strong probability that consecutive memory accesses will target the same row.
However, this architecture is undesirable for many applications, such as real-time control and digital signal processing, that require deterministic, or at least minimum assured, levels of memory performance regardless of the memory address access pattern. One solution is to perform a complete row and column access for every memory operation and automatically close the row at the end of the operation. Unfortunately, even a highly subdivided, small sub-array DRAM architecture remains performance limited by the distributed resistive-capacitive (RC) parasitic characteristics of the bit line material under current DRAM design and layout practices.
Therefore, it is an object of the present invention to provide an equalization circuit that obviates or mitigates one or more of the above-mentioned disadvantages.
SUMMARY OF THE INVENTION
In accordance with an embodiment of the present invention there is provided a circuit for equalizing a signal between a pair of bit lines. The circuit comprises a first equalizing element that is operatively coupled between the pair of bit lines for equalizing the signal, the first equalizing element being located proximate a first end of the pair of bit lines. The circuit further comprises a precharging element that is operatively coupled between the pair of bit lines for precharging the pair of bit lines to a precharge voltage, the precharging element being located proximate to the first equalizing element. The circuit also comprises a second equalizing element that is operatively coupled between the pair of bit lines for equalizing the signal, and located at a predetermined position along the bit lines. As a result of having multiple equalizing elements located along pairs of bit lines, the precharge and equalize function is performed faster than in conventional approaches.
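A simple way to see why distributing equalizing elements along the bit lines helps is to model each bit line as an RC ladder and compare the time for the pair to equalize with a single equalizer at the sense-amplifier end against an equalizer at the end plus one part way along the lines. The sketch below does this with a forward-Euler simulation; every element value (segment resistance and capacitance, equalizer conductance, segment count) is an assumed placeholder, and the precharge path is ignored, so it illustrates the trend rather than the patented circuit itself.

```python
# Minimal sketch, assuming a lumped RC-ladder model of a bit line pair; all
# element values are hypothetical placeholders and the precharge device is
# ignored, so this only illustrates the trend described in the summary.

N_SEG = 32          # RC segments per bit line (assumed)
R_SEG = 50.0        # ohms per segment (assumed)
C_SEG = 5e-15       # farads per segment (assumed)
G_EQ  = 1e-3        # conductance of one equalizing device, siemens (assumed)
VDD   = 1.2         # supply voltage, volts (assumed)
TOL   = 0.01 * VDD  # equalized once every node differs by less than 1% of VDD
DT    = 1e-13       # Euler time step, small relative to R_SEG * C_SEG

def equalization_time(eq_nodes):
    """Time for |BL - /BL| to fall below TOL at every node, with equalizing
    devices shorting the pair together at the segment indices in eq_nodes."""
    bl  = [VDD] * N_SEG   # bit line left high after an access
    blb = [0.0] * N_SEG   # complement bit line left low
    t = 0.0
    while max(abs(a - b) for a, b in zip(bl, blb)) > TOL:
        new_bl, new_blb = bl[:], blb[:]
        for i in range(N_SEG):
            for line, other, new in ((bl, blb, new_bl), (blb, bl, new_blb)):
                i_node = 0.0
                if i > 0:
                    i_node += (line[i - 1] - line[i]) / R_SEG   # from left neighbour
                if i < N_SEG - 1:
                    i_node += (line[i + 1] - line[i]) / R_SEG   # from right neighbour
                if i in eq_nodes:
                    i_node += G_EQ * (other[i] - line[i])       # equalizer current
                new[i] = line[i] + DT * i_node / C_SEG          # forward Euler step
        bl, blb = new_bl, new_blb
        t += DT
    return t

# Equalizer at the sense-amplifier end only, versus end plus mid-line.
print("end only      : %.2f ns" % (equalization_time({0}) * 1e9))
print("end + mid-line: %.2f ns" % (equalization_time({0, N_SEG // 2}) * 1e9))
```

Under these assumptions, the second equalizing element roughly halves the distance charge must travel along the resistive bit line before it can cross to the complement line, which is the intuition behind placing equalizing elements at predetermined positions along the pair.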


REFERENCES:
patent: 5233558 (1993-08-01), Fujii et al.
patent: 5247482 (1993-09-01), Kim
patent: 5291433 (1994-03-01), Itoh
patent: 5349560 (1994-09-01), Suh et al.
patent: 5623446 (1997-04-01), Hisada
patent: 5673219 (1997-09-01), Hashimoto
patent: 5717645 (1998-02-01), Kengeri et al.
patent: 5757707 (1998-05-01), Abe
patent: 6023437 (2000-02-01), Lee
patent: 6166976 (2000-12-01), Ong
patent: 6278650 (2001-08-01), Kang
patent: 6307768 (2001-10-01), Zimmermann
