Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition
Type: Reexamination Certificate
Filed: 1997-03-28
Issued: 2004-03-23
Examiner: Kim, Matthew (Department: 2186)
U.S. Classes: C711S169000, C365S233500
Status: active
Patent Number: 06711648
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to cost efficient methods and apparatus for increasing the data bandwidth associated with dynamic memory devices and, more particularly, to methods and apparatus for increasing the data bandwidth associated with memory devices, such as dynamic random access memory (DRAM) devices, to achieve pipelined nibble mode (PNM) operation. Such methods and apparatus may also find application in the realization of synchronous dynamic random access memory (SDRAM) or other memory devices.
2. Description of the Prior Art
It is generally known that a goal in the design of memory devices, such as DRAMs and SDRAMs, as well as related control circuitry, is to provide increased memory throughput, i.e., increased data bandwidth. It is also generally known that such an increase in data bandwidth may be substantially achieved by parallelizing memory access cycles through the implementation of concurrently operating pipeline stages. However, in the past, this was possible only at considerable cost, since the additional control logic and registers resulted in larger chip sizes.
In DRAM device technology, modes of operation such as hyper-page and EDO (extended data out) have been implemented in an attempt to optimize memory access cycles and thereby increase data bandwidth. Hyper-page and EDO modes of operation are essentially the same in concept and are characterized by a single row address being decoded to activate a common row referred to as a “page.” Activation of a page enables memory locations therein to be randomly accessed (read from or written to) individually by decoding varying column addresses corresponding thereto.
Referring initially to FIG. 1, a timing diagram illustrates an example of EDO mode operation. Particularly, upon the transition of a row address strobe (RAS) signal from a high logic level (e.g., +3.3V) to a low logic level (e.g., 0V), a single row address is decoded, thereby activating that row (page). Next, upon the transition of a column address strobe (CAS) signal from a high logic level to a low logic level, the first column address is decoded and the data corresponding to that column address in the particular activated row (page) is read from the memory location and placed on the external data input/output (DQ) lines of the DRAM device.
If a write operation is being performed, then the selected memory location is provided with data present on the DQ lines of the memory device. Nonetheless, the next column access is received (i.e., next transition of CAS from a high logic level to a low logic level) and the next memory location is accessed in that particular row (page). Data is then either read from or written to the selected memory location in a similar manner as explained above. Such a memory access procedure continues for each occurrence of a new column address (i.e., low logic level CAS).
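The page-mode access pattern described above can be sketched in software. The following is a minimal, purely illustrative model (the class and method names are assumptions, not part of the patent): one RAS falling edge decodes a row address and opens a page, after which successive CAS falling edges read or write individual columns within that open page.

```python
# Hedged sketch of EDO/page-mode accesses: one row (RAS) decode opens a
# page; each subsequent column (CAS) decode reads from or writes to a
# random location within that page. Names and sizes are illustrative.

class PageModeDram:
    def __init__(self, rows: int, cols: int):
        self.cells = [[0] * cols for _ in range(rows)]
        self.open_row = None  # currently activated page, if any

    def ras_fall(self, row: int) -> None:
        """RAS high->low: decode the row address and activate that page."""
        self.open_row = row

    def cas_fall(self, col: int, write_data=None):
        """CAS high->low: access one column of the open page.
        For a read, returns the data placed on the DQ lines."""
        assert self.open_row is not None, "no page is open"
        if write_data is not None:
            self.cells[self.open_row][col] = write_data  # write from DQ lines
            return None
        return self.cells[self.open_row][col]            # read onto DQ lines
```

For example, after `ras_fall(3)` opens row 3, repeated `cas_fall` calls with varying column addresses access that page without re-decoding the row.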
A time interval tAA is shown in FIG. 1 and is defined as the time interval measured from the beginning of a column address transition to the time when data is available to be externally read on the DQ lines. This time interval tAA is critical in such operational modes because, as shown in FIG. 1, the data must be available to be read by the end of this time interval or else the next column access will occur, thereby destroying the data from the previous column access. A major difference between fast-page and hyper-page mode (EDO) operations is that in the former, the data associated with a previous column access is destroyed when CAS transitions to a high logic level while, in the latter, the data from the previous cycle is not destroyed until CAS begins to transition again from a high logic level to a low logic level. Nonetheless, it is to be appreciated that the time interval tAA is the time parameter which limits the ability to increase the frequency of CAS occurrences (i.e., the CAS frequency) and, therefore, limits the data bandwidth realizable in these particular modes of operation.
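The limiting role of tAA can be made concrete with a small calculation. The sketch below is illustrative only (the 25 ns access time and 16-bit bus width are assumed example values, not figures from the patent): if valid data must appear within one CAS cycle, the CAS period cannot be shorter than tAA, which caps both the CAS frequency and the peak bandwidth.

```python
# Hedged sketch: in fast-page/EDO operation the CAS period cannot be
# shorter than t_AA, so the access time caps the attainable CAS
# frequency and, with it, the data bandwidth. Values are illustrative.

def max_cas_frequency_hz(t_aa_ns: float) -> float:
    """CAS frequency ceiling when data must be valid within one CAS cycle."""
    return 1.0 / (t_aa_ns * 1e-9)

def peak_bandwidth_mb_s(t_aa_ns: float, bus_width_bits: int) -> float:
    """One word per CAS cycle at the frequency ceiling, in MB/s."""
    return max_cas_frequency_hz(t_aa_ns) * (bus_width_bits / 8) / 1e6

# e.g., an assumed t_AA of 25 ns limits CAS to roughly 40 MHz,
# or about 80 MB/s on a 16-bit data bus.
```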
More recently, an alternative mode of operation has been developed which is known as pipelined nibble mode (referred to hereinafter as PNM). PNM operation, also referred to as burst EDO, is a mode of operation which involves pipelined read access of a particular DRAM device. The major difference between fast-page mode or hyper-page mode and PNM is that in the former, data is available on the DQ lines (or retrievable from the DQ lines) before the next column access (i.e., before the occurrence of the next CAS transition to a low logic level), while in PNM or burst EDO mode, there exists a latency period which dictates that data is not provided to be read externally (from the DQ lines) until some time after the second low logic level CAS, e.g., before the third CAS low occurrence. Such a CAS latency allows for pipelining and, thus, much higher CAS frequencies (i.e., greater than approximately 100 Megahertz).
Referring to FIG. 2, a timing diagram illustrates an example of PNM operation. Particularly, similar to EDO mode operation, a single row address is decoded, thereby activating that row of memory locations, upon the transition of RAS from a high logic level to a low logic level. Next, a first column address is presented and decoded in accordance with the first occurrence of a low logic level transition of CAS; however, unlike EDO mode, the data is not placed on the external DQ lines until the second CAS occurrence. Further, as shown in FIG. 2, the data is not destroyed on (i.e., lost from) the DQ lines until the third transition of CAS to a low logic level and, thereafter, data is continuously provided for a fixed number of CAS cycles (i.e., a burst of several data words). While a burst of only two data words is depicted in FIG. 2, it is to be understood that PNM will support higher quantities of words per burst (e.g., four, eight, etc.). Also, after a burst of n words, a new (random) column address must be presented to the device at the nth CAS occurrence.
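The PNM sequencing described above can be sketched as a simple schedule of which burst word is valid at each CAS falling edge. This is a minimal illustrative model (the function name and the modulo-based wraparound standing in for a new column address are assumptions): no data is valid at the first CAS edge, the first word appears at the second, and one word then streams out per CAS cycle.

```python
# Hedged sketch of PNM timing: data first appears at the second CAS
# falling edge, then a burst of `burst_len` words streams out, one word
# per CAS cycle. The modulo wraparound stands in for presenting a new
# column address at the nth CAS occurrence. Purely illustrative.

def pnm_read_schedule(burst_len: int, num_cas_edges: int):
    """Return, per CAS falling edge (1-based), which burst word
    (0-based) is driven on the DQ lines, or None while no data
    is valid yet (the CAS latency period)."""
    schedule = []
    for edge in range(1, num_cas_edges + 1):
        if edge < 2:
            schedule.append(None)            # latency: no data until 2nd CAS
        else:
            schedule.append((edge - 2) % burst_len)  # one word per CAS
    return schedule

# pnm_read_schedule(2, 5) -> [None, 0, 1, 0, 1]
```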
Several advantages flow from such PNM operation. First, as shown in FIG. 2, one column access (a CAS transition to the low logic level) yields a multiple word burst. Even more significant, however, is the fact that data is not required on the external DQ lines until after the second CAS occurrence, which allows a significantly longer time interval tAA within which to operate. As a result of the longer time interval tAA, pipeline stages can be formed to increase the CAS frequency.
On the other hand, SDRAM device technology has also attempted to optimize memory access cycles while working within the confines of uniform clock periods defined by a system clock which provides memory access synchronization. A typical manner in which SDRAM devices operate is as follows. A column address is presented and decoded in the first clock period. Within the next clock period, the decoded address is utilized to bring up (activate) appropriate column select lines and sense the addressed memory locations. In the third clock period, the decoded address is used to actually retrieve the data from the appropriate memory locations and place such data on the DQ lines.
While three clock periods elapse before the SDRAM device outputs data, each period thereafter yields data, thus providing a continuous data output. Similar to PNM operation in DRAM devices, a longer time interval tAA would be possible, which would allow for pipelined operation in order to get continuous (burst) data out after the first memory access propagates through the memory device.
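The three-stage SDRAM behavior described above (address decode, column select/sense, data output) can be sketched as a software pipeline. The model below is illustrative only (the function and stage names are assumptions): fed one column address per clock, it produces the first word after three clocks and one word on every clock thereafter.

```python
# Hedged sketch of the three-stage SDRAM pipeline described above:
# stage 1 decodes the column address, stage 2 drives the column select
# lines and senses the cells, stage 3 places data on the DQ lines.
# With the stages operating concurrently, the first word appears after
# three clocks and every subsequent clock yields another word.

def sdram_pipeline(column_addresses):
    """Simulate per-clock DQ output of a 3-stage pipeline fed one
    column address per clock; None means no valid data that clock."""
    decode, sense = None, None   # pipeline registers between stages
    outputs = []
    # feed the addresses, then two idle clocks to drain the pipeline
    for addr in list(column_addresses) + [None, None]:
        outputs.append(sense)    # stage 3: output data for oldest address
        sense = decode           # stage 2: column select / sensing
        decode = addr            # stage 1: column address decode
    return outputs

# sdram_pipeline(["A0", "A1", "A2"]) -> [None, None, "A0", "A1", "A2"]
```

Note the continuous output once the pipeline is full: after the initial two empty clocks, every clock period yields a data word, matching the "three clock periods to first data, then one word per clock" behavior described above.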
However, in order to achieve the above-described benefits associated with optimizing memory access cycles in cooperation with the latency associated with data (DQ) validation after two or more CAS cycles (referred to hereinafter as CAS latency), it would be necessary to add pipestage circuitry, latches and other DRAM- and SDRAM-specific control logic to the memory device itself and/or to the associated controlling circuitry. For example, with respect to SDRAM devices, each pipe stage ideally would have to be of the same
Inventors: Poechmueller, Peter; Watanabe, Yohji; Choi, Woo
Law Firm: F. Chau & Associates LLC
Examiner: Kim, Matthew
Assignees: Siemens Aktiengesellschaft; Kabushiki Kaisha Toshiba