High performance multi-bank compact synchronous DRAM...

Static information storage and retrieval – Addressing – Plural blocks or banks

Reexamination Certificate


Details

Class: C365S233100
Type: Reexamination Certificate
Status: active
Patent number: 06442098

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to semiconductors, and more particularly to memory devices such as Synchronous Dynamic Random Access Memory (SDRAM) devices.
2. Discussion of Related Art
Conventional Dynamic Random Access Memory (DRAM), of the type that has been used in PCs since the original IBM PC, is said to be asynchronous. This refers to the fact that the operation of the memory is not synchronized to the system clock but depends entirely on the timing inherent in the memory device regardless of the frequency of the system clock.
For example, referring to FIG. 1, a system 100 has a processor 101 that is coupled to a memory controller 104 by way of an address bus 106 and a bi-directional data bus 108. The memory controller 104 is, in turn, coupled to an asynchronous type memory device 110 by way of both the address bus 106 and the data bus 108. In order to access the memory device 110 in what is referred to as either a READ or a WRITE operation, a specific procedure must be followed. Typically, the processor 101 generates a specific memory address request (also referred to as a memory page request) corresponding to the location in the memory device 110 where the data (or memory page) required by the processor 101 is stored. The memory address request is passed to the memory controller 104 by way of the address bus 106.
In conventional memory systems, the memory controller 104 generates the appropriate memory access signals, which are decoded by the memory device 110 to identify the memory location in the memory device 110 where the requested data is stored. Once accessed, the stored data is output to the data bus 108 to be read by the processor 101 or whatever other device requested it. It should be noted that since the above-described operations are performed asynchronously with regard to the system clock, the processor 101 is usually required to wait for the appropriate data to be made available. These wait states degrade effective processor performance, since the processor 101 cannot complete a desired operation without the requisite data from the memory device 110.
More specifically, during, for example, a READ operation, the processor 101 generates an address request corresponding to the memory location in the memory device 110 at which the required data is stored. Since all memory chips hold their contents in a logical “square” of memory cells 112 arranged in rows 114 and columns 116, reading data stored in, for example, the memory cell 112a requires that a row 114a first be activated using what is referred to as a “Row Address Select” (or “Row Address Strobe,” /RAS) signal that is provided by the memory controller 104. Specifically, /RAS is a signal sent to a DRAM that tells it that an associated address is a row address. Typically, the /RAS signal is based upon the “lower half” of the address request provided by the processor 101. When received and properly decoded, the /RAS signal causes the data in the entire row 114a to be transferred to a sense amp 118 after a period of time required for the selected row to stabilize.
Once the selected row has stabilized and the data in the selected row has been transferred to the sense amp 118, the memory controller 104 further decodes the address request to form what is referred to as a “Column Address Select” (/CAS) signal, which when sent to a DRAM tells it that an associated address is a column address. The /CAS signal causes column select circuitry (not shown) to select the specific cell (in this case 112a) in the memory array that contains the desired data. The data stored in the cell 112a is then sent out from the sense amp 118 to the data bus 108, where the processor 101 or other device that requested the data can read it. It should be noted that the data bus 108 is a bi-directional data bus since, during a WRITE operation, the processor 101 provides data to be stored in the memory device 110.
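The /RAS-then-/CAS access sequence described above can be sketched as a toy Python model. This is only an illustrative sketch of the protocol as narrated; the class and names (DramArray, ras, cas) are assumptions for illustration, not circuitry or identifiers from the patent.

```python
# Toy model of the asynchronous DRAM READ sequence described above.
# All names are illustrative, not from the patent.

class DramArray:
    def __init__(self, rows=4, cols=4):
        self.rows, self.cols = rows, cols
        # A logical "square" of memory cells, rows x columns.
        self.cells = [[(r * cols + c) & 0xFF for c in range(cols)]
                      for r in range(rows)]
        self.sense_amp = None  # holds one full row after /RAS

    def ras(self, addr):
        """/RAS phase: the row portion of the address selects a row,
        whose entire contents are transferred to the sense amps."""
        row = addr // self.cols          # "lower half" of the address -> row
        self.sense_amp = list(self.cells[row])

    def cas(self, addr):
        """/CAS phase: the column portion of the address picks one cell
        out of the row already latched in the sense amps."""
        assert self.sense_amp is not None, "row must be activated first"
        col = addr % self.cols
        return self.sense_amp[col]

dram = DramArray()
addr = 6                 # row 1, column 2 in a 4x4 array
dram.ras(addr)           # entire row 1 lands in the sense amps
data = dram.cas(addr)    # column select drives one cell to the bus
print(data)
```

Note that a second read from the same row could skip the `ras()` step, which is the observation burst-mode SDRAM later exploits.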
FIG. 2 is a timing diagram 200 illustrating the above-described READ operation. The performance of the memory device 110 is based upon several critical timing paths, which include the duration of time between the falling edge of the /RAS signal and the availability of data at the data bus 108 (referred to as access time from /RAS, or tRAC). Another critical timing path, referred to as access time from column address, tCAC, is defined as the duration of time from the falling edge of /CAS to data out on the data bus 108. Any, and all, of these delays, also referred to as memory latency, degrade system performance, since the speed of the DRAM is directly limited by its slowest critical path.
Usually, the worst case latency in any DRAM is specified by the row access time tRAC, which is itself composed of several components, at least two of which are directly related to data line length (and therefore chip size and bit density) and the associated capacitive loading coupled thereto (referred to as RC delay). One such component, bit line sensing latency, is defined as the time for the data stored in a memory cell to be detected by the corresponding sense amp. This bit line sensing latency is affected by many factors, including bit line architecture, the RC of the sense amp drive line, the cell-to-bit-line capacitance ratio, and sense amp topology. Another component which substantially contributes to overall memory latency is output driving latency, defined as the time required for the data to be propagated from the sense amp to the output node (again an RC-type delay).
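The decomposition of tRAC into RC-dominated components can be illustrated with a back-of-the-envelope calculation. The simple first-order R×C model and every numeric value below are assumptions chosen for illustration, not figures from the patent.

```python
# Back-of-the-envelope model of the two tRAC components described above.
# The R*C model and all numbers are illustrative assumptions.

def rc_delay(r_ohms, c_farads):
    """First-order RC time constant, a common proxy for line delay."""
    return r_ohms * c_farads

# Bit line sensing latency: driven by the sense-amp drive line RC and
# the cell-to-bit-line capacitance ratio (a bigger ratio senses faster).
sense_line_delay = rc_delay(2_000, 0.5e-12)      # 2 kOhm, 0.5 pF -> 1 ns
cell_ratio = 0.1                                 # cell cap / bit-line cap (assumed)
sensing_latency = sense_line_delay / cell_ratio  # toy scaling: 10 ns

# Output driving latency: data propagating from sense amp to output node.
output_latency = rc_delay(5_000, 2e-12)          # 5 kOhm, 2 pF -> 10 ns

t_rac = sensing_latency + output_latency
print(f"tRAC = {t_rac * 1e9:.0f} ns")
```

Under these assumed numbers the two components contribute about equally, which is why the conventional techniques discussed next attack both.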
Conventional attempts to reduce tRAC generally strive to reduce these two components by way of various circuit and layout techniques. In the case of bit line sensing latency, since the cell-to-bit-line capacitance ratio directly impacts the bit line sensing delay, increasing this ratio reduces the bit line sensing latency (by providing a higher memory cell drive current). Typically, this approach is practiced either by increasing memory cell capacitance (by increasing cell size) or by putting fewer memory cells on a single bit line. Unfortunately, however, both of these approaches increase overall cell area, which reduces cell density, resulting in larger chips with lower bit density and a concomitant increase in cost.
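The trade-off between cells-per-bit-line and sensing speed can be seen in the classic charge-sharing relation: the voltage swing the sense amp must detect shrinks as bit line capacitance grows relative to the cell. The formula is standard DRAM physics; the capacitance values are assumptions for illustration.

```python
# Charge-sharing sketch of the cell-to-bit-line capacitance trade-off.
# Capacitance values are illustrative assumptions.

def bitline_swing(v_cell, c_cell, c_bitline):
    """Voltage change seen on the bit line after the cell's stored
    charge is shared with it (classic charge-sharing formula)."""
    return v_cell * c_cell / (c_cell + c_bitline)

c_cell = 30e-15  # 30 fF storage cell (assumed)
for n_cells_per_bitline in (128, 256, 512):
    # Assume roughly 1 fF of bit line capacitance per attached cell.
    c_bitline = n_cells_per_bitline * 1e-15
    dv = bitline_swing(1.0, c_cell, c_bitline)
    print(f"{n_cells_per_bitline} cells/bit line -> swing {dv * 1000:.0f} mV")
```

Fewer cells per bit line means a larger swing and faster sensing, but also more bit lines and sense amps for the same capacity, which is exactly the area/density penalty described above.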
Even with these circuit delays, the asynchronous DRAM memory device 110 works well in lower-speed memory bus systems; it is not nearly as suitable, however, for use in high-speed (>66 MHz) memory systems, since each READ operation and WRITE operation can be no faster than the memory latency, which is typically on the order of 5-7 clock cycles. In order to service these high-speed systems, therefore, a relatively new and different kind of RAM, referred to as Synchronous DRAM, or SDRAM, has been developed. The SDRAM differs from earlier types of DRAM in that it is tied to the system clock and therefore does not run asynchronously as standard DRAMs do. Since SDRAM is tied to the system clock and is designed to READ or WRITE from memory in what is referred to as a burst mode (after the initial READ or WRITE latency) at one clock cycle per access (zero wait states), the SDRAM is able to operate at bus speeds of up to 100 MHz or even higher. By running at the system clock, no wait states are typically required (after initial setup) by the processor, resulting in the higher system speeds.
SDRAM accomplishes its faster access using a number of internal performance improvements, including a “burst mode” capability that allows the SDRAM to transfer multiple cells without cycling the /CAS line, thereby limiting the CAS latency to the first few clock cycles of the burst read. This operation is what makes SDRAM “faster” than conventional DRAM even though the actual internal operations are essentially the same. By way of example, a 4-cycle burst READ can be accomplished in 8 clock cycles (5,1,1,1), where “5” represents the initial READ latency of 5 clock cycles and each “1” represents a single-cycle access for the remainder of the burst.
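The (5,1,1,1) burst arithmetic reduces to a one-line cycle count: an initial latency, then one transfer per clock. The comparison against a fully asynchronous device is a simplified illustration that assumes, for the sake of contrast, the same latency paid on every access.

```python
# Cycle count for the burst READ described above: initial latency,
# then one transfer per clock. (5,1,1,1) -> 8 cycles for 4 words.

def burst_read_cycles(initial_latency, burst_length):
    """Total clock cycles for a burst: full latency for the first
    access, then one cycle per remaining transfer (zero wait states)."""
    return initial_latency + (burst_length - 1)

print(burst_read_cycles(5, 4))   # 8 cycles, matching the (5,1,1,1) example

# Simplified contrast: an asynchronous device paying the full latency
# on every one of the 4 accesses (latency assumed equal for illustration).
print(5 * 4)                     # 20 cycles for the same 4 words
```

The gap widens with burst length: an 8-word burst costs 12 cycles in burst mode versus 40 in the simplified per-access model.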
