Precharging scheme for reading a memory cell
Type: Reexamination Certificate
Class: Static information storage and retrieval – Floating gate – Particular biasing
Filed: 2002-08-22
Issued: 2004-08-03
Examiner: Ho, Hoai (Department: 2818)
U.S. Classes: 365/185.17; 365/203
Status: active
Patent number: 6,771,543
ABSTRACT:
TECHNICAL FIELD
The present claimed invention generally relates to an array of memory cells. More specifically, the present claimed invention relates to virtual ground architecture memory arrays.
BACKGROUND ART
The architecture of a typical memory array is known in the art. Generally, a memory array includes a number of lines arranged as rows and columns. The rows of the array are commonly referred to as word lines and the columns as bit lines, although it is understood that such terminology is relative.
The word lines and bit lines overlap at what can be referred to as nodes. Situated at or near each node is a memory cell, which is generally some type of transistor. In a virtual ground architecture, a bit line can serve as either a source or drain line for the transistor (memory cell), depending on which memory cell is being program verified or read. For simplicity of discussion, a “read” can refer to either a read operation or a program verification operation.
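To make the virtual ground idea concrete, the following sketch (hypothetical Python, not from the patent; the column-to-bit-line numbering is an assumption) shows how the same bit line can act as either the drain or the source of a selected cell depending on which side of the cell is being read.

```python
def bitline_roles(selected_column, read_left_side=True):
    """Illustrative mapping for a virtual ground array.

    The cell in column `c` is assumed to sit between bit line `c` (left)
    and bit line `c + 1` (right).  Which side is driven as the drain
    depends on the read; the other side serves as the virtual ground
    (source).
    """
    left, right = selected_column, selected_column + 1
    if read_left_side:
        return {"drain": left, "source": right}
    return {"drain": right, "source": left}

# Reading cell 5 one way: bit line 5 is the drain, bit line 6 the source.
print(bitline_roles(5))
# Reading the same cell from the other side swaps the roles.
print(bitline_roles(5, read_left_side=False))
```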
When reading a selected memory cell, a core voltage is applied to the word line corresponding to that cell, and the bit line corresponding to that cell is connected to a load (e.g., a cascode or cascode amplifier). Because of the architecture of the memory array, all of the memory cells on the word line are subject to the core voltage. This can induce a leakage current along the word line, in effect causing an unwanted interaction between the memory cells on the word line. The leakage current, if of sufficient magnitude, can slow down the read and also cause errors in reading the selected memory cell.
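The leakage mechanism can be illustrated with a rough first-order model (assumed values and a simple series-conductance approximation, not anything disclosed in the patent): the driven drain bit line can also discharge sideways through the chain of neighboring cells on the same word line, and the more conductive those neighbors are, the larger the unwanted current.

```python
def wordline_leakage(v_drain, neighbor_conductances):
    """First-order leakage estimate along a word line.

    Models the unselected neighbor cells between the driven drain bit
    line and a grounded bit line as conductances in series; the leakage
    current is the drain voltage divided by the total series resistance.
    All conductance values are illustrative placeholders.
    """
    if not neighbor_conductances:
        return 0.0
    series_resistance = sum(1.0 / g for g in neighbor_conductances)
    return v_drain / series_resistance

# Conductive (erased) neighbors leak far more than programmed ones.
conductive = [50e-6, 50e-6, 50e-6]   # siemens, placeholder values
programmed = [1e-6, 1e-6, 1e-6]
print(wordline_leakage(1.2, conductive))   # ~2e-5 A
print(wordline_leakage(1.2, programmed))   # ~4e-7 A
```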
To minimize the interaction among memory cells on a word line and to speed up the read, a technique commonly referred to as precharging is used. Precharging works by applying an electrical load (charge) to the node next to the node that corresponds to the memory cell being read. Specifically, the node next to, and on the same word line as, the drain node of the selected memory cell is precharged. If the drain node and the precharge node are at about the same voltage, then the precharge has the effect of reducing the leakage current.
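The effect of precharging the adjacent node can be sketched with the same kind of first-order model (illustrative values, not from the patent): the current through the cell between the drain node and the adjacent node scales with the voltage difference across it, so a precharge voltage close to the drain voltage suppresses the leakage.

```python
def leakage_into_neighbor(v_drain, v_precharge, g_adjacent_cell=50e-6):
    """Current flowing from the drain node into the adjacent node.

    The adjacent cell on the same word line is modeled as a simple
    conductance; driving its far terminal (the precharge node) close to
    the drain voltage leaves little voltage across the cell, and hence
    little leakage.  Values are illustrative placeholders.
    """
    return g_adjacent_cell * (v_drain - v_precharge)

print(leakage_into_neighbor(1.2, 0.0))   # no precharge: 6e-05 A
print(leakage_into_neighbor(1.2, 1.1))   # slightly low precharge: 5e-06 A
print(leakage_into_neighbor(1.2, 1.2))   # matched precharge: 0.0 A
```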
A problem with precharging is that it is difficult to predict the voltage that needs to be applied to the precharge node. It is important to apply an appropriate precharge voltage because, if the precharge voltage is set too high or too low, the memory cell may not be properly read. However, there are many factors that can influence the amount of leakage current and hence the amount of voltage that should be applied to the precharge node. These factors include variations in temperature and in the supply voltage. In addition, a relatively new memory architecture, referred to as a mirror bit architecture, is coming into use. In a contemporary mirror bit architecture, two bits can be stored per memory cell, as opposed to the single bit that is conventionally stored in a memory cell. The pattern of bits (e.g., 00, 01, 10 or 11) stored in a mirror bit memory cell can also influence the amount of leakage current. Thus, estimating the proper amount of precharge voltage can be difficult and may be even more difficult for mirror bit architectures.
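The difficulty can be pictured with a deliberately naive sketch (hypothetical code; every coefficient and the form of each correction are placeholders, not anything disclosed in the patent): an open-loop estimate of the precharge voltage has to fold in several independent operating conditions, each of which shifts the leakage current.

```python
def estimate_precharge_voltage(temp_c, vcc, bit_pattern,
                               v_nominal=1.0,
                               temp_coeff=-0.002,   # V per deg C, placeholder
                               vcc_coeff=0.3):      # V per volt of supply shift, placeholder
    """Hypothetical open-loop precharge estimate for a mirror bit cell.

    Every term below is an assumed placeholder; the point is only that
    the estimate depends on temperature, supply voltage, and the bit
    pattern stored in the cell, so a single fixed value is hard to pick.
    """
    pattern_offset = {"00": 0.00, "01": 0.02, "10": 0.02, "11": 0.05}  # volts, placeholder
    v = v_nominal
    v += temp_coeff * (temp_c - 25.0)    # temperature drift
    v += vcc_coeff * (vcc - 3.0)         # supply-voltage variation
    v += pattern_offset[bit_pattern]     # stored data in the mirror bit cell
    return v

print(estimate_precharge_voltage(85.0, 2.7, "11"))
```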
In summary, reading memory cells according to prior art techniques can be problematic if the precharge voltage is not properly selected; however, selecting the proper precharge voltage is difficult because of the factors involved. Accordingly, a technique for reading memory cells that addresses the problems of the prior art would be useful.
DISCLOSURE OF THE INVENTION
A method of reading a memory cell, and a memory array using the method, are described in various embodiments. In one embodiment, an electrical load is applied to a first node (or bit line) in a memory array, the first node corresponding to a memory cell. A second node (or bit line) in the memory array, on the same word line as the first node, is precharged. The second node is separated from the first node by at least one intervening node on the same word line. In one embodiment, the second node is in the range of two to five nodes from the first node.
In one embodiment, the memory cell utilizes a mirror bit architecture wherein two bits of data are stored in the memory cell.
In another embodiment, a third node in the memory array is precharged, so that multiple nodes on the word line are precharged.
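Putting these embodiments together, a minimal sketch (hypothetical Python; the node indexing and the placement of any additional precharge nodes are assumptions, not taken from the patent) of choosing which nodes to drive for one read:

```python
def plan_read(drain_node, precharge_offset=2, extra_precharge_nodes=0):
    """Pick the nodes to drive for one read, per the described method.

    The first node (the selected cell's drain) is connected to the load;
    a second node on the same word line, separated from the first by at
    least one intervening node (an offset of two to five in one
    embodiment), is precharged.  Optionally, further nodes are precharged
    as well; placing them just beyond the second node is purely an
    illustrative choice.  Indices count positions along the word line.
    """
    if not 2 <= precharge_offset <= 5:
        raise ValueError("one embodiment precharges a node two to five nodes away")
    load_node = drain_node
    precharge_nodes = [drain_node + precharge_offset + i
                       for i in range(1 + extra_precharge_nodes)]
    return load_node, precharge_nodes

# Read the cell whose drain is node 10: load on node 10, precharge node 13,
# and (in the multiple-precharge embodiment) node 14 as well.
print(plan_read(10, precharge_offset=3, extra_precharge_nodes=1))
```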
REFERENCES:
patent: 6081456 (2000-06-01), Dadashev
patent: 6088265 (2000-07-01), Ohta
patent: 6487124 (2002-11-01), Semi
Inventors: Pau-Ling Chen, Michael S. Chung, Keith Wong