Stored write scheme for high speed/wide bandwidth memory...

Static information storage and retrieval – Addressing – Plural blocks or banks

Reexamination Certificate


Details

Patent class: C365S189050
Type: Reexamination Certificate
Status: active
Patent number: 06351427

ABSTRACT:

TECHNICAL FIELD
The present invention relates generally to semiconductor memory devices, and more particularly to circuits for reading data from, and writing data into, the memory cells of a memory device.
BACKGROUND OF THE INVENTION
Computing systems typically include a computing device (a microprocessor, for example) for manipulating data, and a storage device for storing data for use by the computing device. A common type of storage device is a semiconductor random access memory (RAM). In order to provide the best system performance, it is desirable to allow the computing device to operate as fast as possible, and never be forced into an idle state while waiting to receive or store data. To achieve this result, it is important to provide a data storage device that will read and write data as quickly as possible. This gives rise to an important aspect of semiconductor memory device performance: the rate at which data can be read from, or written into, the device (often referred to as “bandwidth”).
A typical RAM includes one or more arrays having memory cells arranged in rows and columns. The memory cells are accessed in read and write operations by way of a data bus. While large data buses can increase the bandwidth of a RAM, such an approach incurs the penalty of increasing the physical size of the RAM. For this reason, in RAMs that include multiple arrays, a data bus is typically a “global” bus. That is, the data bus is commonly connected to a number of arrays. Further, to reduce the area of a RAM, the data bus is often “shared.” That is, the same set of lines within the data bus that is used to write data to the arrays is also used to read data from the arrays. Thus, if a write operation is sending data into a memory array by way of the data bus, the write operation must be completed before a subsequent read operation can retrieve data from a memory array. Otherwise, the input and output data would both be on the data bus simultaneously, resulting in erroneous operation of the RAM. Any delay incurred between write and read operations is undesirable, because the computing device of the system may have to wait during such a delay in order to complete its computing function. The time period in which the computing system must wait for a data access operation of a storage device is often referred to as a wait state. Wait states are to be avoided, if possible, because they reduce the efficiency of system data bus timing, and hence reduce bandwidth.
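As a rough behavioral illustration of this contention (not part of the patent; the class and parameter names below are hypothetical), the following Python sketch models a shared data bus on which a read requested while a prior write is still completing must stall, accumulating wait states:

```python
class SharedBus:
    """Toy model of a shared read/write data bus (hypothetical names)."""

    def __init__(self, write_latency_cycles: int):
        self.write_latency_cycles = write_latency_cycles
        self.busy_until = 0  # cycle at which the bus becomes free again

    def write(self, start_cycle: int) -> int:
        """Begin a write; the bus is occupied until the internal write completes."""
        begin = max(start_cycle, self.busy_until)
        self.busy_until = begin + self.write_latency_cycles
        return begin

    def read(self, start_cycle: int) -> tuple:
        """Begin a read; return (actual start cycle, wait states incurred)."""
        begin = max(start_cycle, self.busy_until)
        return begin, begin - start_cycle


bus = SharedBus(write_latency_cycles=4)
bus.write(start_cycle=0)               # the write occupies the bus for cycles 0-3
start, waits = bus.read(start_cycle=1)
print(f"read started at cycle {start} after {waits} wait states")  # cycle 4, 3 waits
```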
To more clearly illustrate the wait states that can occur in a RAM operation, a block schematic diagram of a RAM is set forth in FIG. 1. The RAM is designated by the general reference character 100, and is shown to include a number of memory banks, beginning with a first memory bank 102a, a second memory bank 102b, and terminating in a last memory bank 102n.
Each memory bank (102a-102n) can include more than one memory cell array. The storage locations within each memory bank (102a-102n) are accessed by corresponding row decoders (104a-104n) and column decoders (106a-106n). The row decoders (104a-104n) are each coupled to a row address buffer 108 by a row address bus 110. In a similar fashion, the column decoders (106a-106n) are each coupled to a column address buffer 112 by a column address bus 114. The RAM 100 further includes an address latch 116 for receiving and latching address information from a “multiplexed” address bus 118. The multiplexed address bus 118 is “multiplexed” in the sense that it receives either row address or column address information. The column address buffer 112 receives column address information from both address latch 116 and the multiplexed address bus 118.
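As a very loose structural sketch of the address path just described (hypothetical Python; the FIG. 1 reference numerals appear only in comments, and the decoders are reduced to simple index selection), the banks and the address latch might be modeled as follows:

```python
from dataclasses import dataclass, field


@dataclass
class MemoryBank:
    """One memory bank (102a-102n), reduced to a rows x cols array of bits."""
    rows: int
    cols: int
    cells: list = field(init=False)

    def __post_init__(self):
        self.cells = [[0] * self.cols for _ in range(self.rows)]

    def read_cell(self, row: int, col: int) -> int:
        # The row decoder (104) selects the row; the column decoder (106) the column.
        return self.cells[row][col]


class AddressLatch:
    """Address latch 116: holds a column address taken from the multiplexed bus 118."""

    def __init__(self):
        self.column = None

    def store(self, address: int) -> None:
        self.column = address


banks = [MemoryBank(rows=4, cols=8), MemoryBank(rows=4, cols=8)]  # e.g. 102a, 102b
latch = AddressLatch()
latch.store(5)                                # column address captured from the bus
print(banks[0].read_cell(2, latch.column))    # bank "102a", row 2, column 5 -> 0
```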
The various functions of the RAM 100 are initiated by a command decoder 120. In response to information provided on a command bus 122 and/or the multiplexed address bus 118, the command decoder 120 activates a collection of control signals. Five control signals are illustrated in FIG. 1: a STORE signal, a READ signal, a WRITE signal, a COLINIT signal, and an ICLK signal. The STORE signal results in a column address being latched in the address latch 116. The READ signal initiates the internal read operation. The WRITE signal indicates an internal write function. For the purposes of this discussion, the distinction between a write operation and an internal write function should be kept in mind: the internal write function is the final step in a write operation, and includes the act of physically writing data into the memory cells of the array.
Referring once again to the control signals provided by the command decoder 120, it is noted that the COLINIT signal pulses high at the start of a column access. The ICLK signal pulses high for each bit in a pre-fetch operation. Pre-fetch operations will be described below. The particular RAM 100 disclosed is a synchronous RAM, and so the RAM 100 operations are synchronous with an externally applied clock, shown as CLK.
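To make the roles of these five signals concrete, here is a hypothetical Python sketch of a command decoder in the spirit of block 120. The command names, the cycle-by-cycle sequencing, and the one-cycle pulse widths are assumptions made for illustration; only the signal roles are taken from the description above:

```python
PREFETCH_DEPTH = 8  # assumed eight-bit pre-fetch, as in the FIG. 1 discussion


def decode_command(command: str) -> list:
    """Translate a command into per-cycle control-signal pulses (illustrative only)."""
    if command == "WRITE":
        # Latch the column address (STORE) at the start of the column access,
        # then indicate the internal write function (WRITE).
        return [{"STORE": 1, "COLINIT": 1}, {"WRITE": 1}]
    if command == "READ":
        # Start the column access, then pulse ICLK once per bit of the pre-fetch.
        return [{"COLINIT": 1, "READ": 1}] + [{"ICLK": 1}] * PREFETCH_DEPTH
    return []


for cycle, signals in enumerate(decode_command("READ")):
    print(f"cycle {cycle}: {signals}")
```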
Referring once again to FIG. 1, it is shown that the column decoders (106a-106n) are coupled to a write circuit 124 and a read circuit 126 by a shared data bus 128. The data bus is “shared” in that it is used for both read and write operations. The operation of the write and read circuits (124 and 126) is controlled by a shift clock circuit 130 that generates a SHFTCLK signal. In response to the SHFTCLK signal, the write circuit 124 couples data from an I/O bus 132 to the shared data bus 128, or the read circuit 126 couples data from the shared data bus 128 to the I/O bus 132. Data is placed on the I/O bus 132 at a number of data I/Os 134.
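The steering between the I/O bus and the shared bus can be pictured with a small hypothetical sketch (the function and variable names are not from the patent). Because the same lines serve both directions, only one of the two circuits may drive the shared bus at a time:

```python
def transfer(direction: str, io_bus: list, shared_bus: list) -> None:
    """Move one data set between the I/O bus (132) and the shared data bus (128)."""
    if direction == "write":       # write circuit 124 drives the shared bus
        shared_bus[: len(io_bus)] = io_bus
    elif direction == "read":      # read circuit 126 drives the I/O bus
        io_bus[:] = shared_bus[: len(io_bus)]
    else:
        raise ValueError("the shared bus is driven in only one direction at a time")


io_bus = [1, 0, 1, 1]              # 4-bit I/O bus, for brevity
shared_bus = [0] * 4
transfer("write", io_bus, shared_bus)   # one SHFTCLK-triggered transfer
print(shared_bus)                       # [1, 0, 1, 1]
```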
The architecture of the RAM 100 in FIG. 1 is referred to as a “pre-fetch” architecture. A pre-fetch architecture is one in which multiple sets of data bits are read from an array at one time and can be sequentially output, one set after the other. For example, in an eight-bit pre-fetch architecture, for each data output, eight bits are read from a memory bank and will be available to be output. In other words, in the case of FIG. 1 (which includes an eight-bit pre-fetch), the read operation will initially retrieve 128 bits of data. This data can then be output in eight sets of 16 bits. Pre-fetch architectures can be particularly advantageous for “burst” mode RAMs. In a burst mode RAM, a sequence of addresses is accessed by the application of a single address. By utilizing a pre-fetch architecture, all bits required for the burst sequence are available with one read operation, obviating the need to address a memory bank a multiple number of times.
Because the RAM 100 of FIG. 1 has a pre-fetch architecture, the shared data bus 128 is larger than the I/O bus 132 by a multiple equivalent to the size of the pre-fetch. For example, if the I/O bus 132 were 16 bits wide, and the RAM 100 allowed for an eight-bit pre-fetch, the shared data bus 128 would be 128 bits wide. In addition, there would be an eight-bit latch circuit associated with each data I/O to store the eight pre-fetched bits. Data would be sequentially output from the latches in response to a number of SHFTCLK signals.
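A behavioral sketch of this read path, in hypothetical Python, assuming the 16-bit I/O bus and eight-bit pre-fetch of the example: a single internal read fills 128 output latches, and successive SHFTCLK pulses then place one 16-bit set at a time on the I/O bus.

```python
PREFETCH = 8                          # eight-bit pre-fetch
IO_WIDTH = 16                         # assumed 16-bit I/O bus
SHARED_WIDTH = PREFETCH * IO_WIDTH    # 128-bit shared data bus


def prefetch_read(read_bank):
    """One internal read fills the output latches; data then leaves in bursts."""
    latches = read_bank(SHARED_WIDTH)          # single 128-bit array access
    assert len(latches) == SHARED_WIDTH
    for i in range(PREFETCH):                  # one 16-bit set per SHFTCLK pulse
        yield latches[i * IO_WIDTH:(i + 1) * IO_WIDTH]


toy_bank = lambda width: [i % 2 for i in range(width)]   # stand-in for a memory bank

for pulse, data_set in enumerate(prefetch_read(toy_bank)):
    print(f"SHFTCLK {pulse}: {len(data_set)} bits on the I/O bus")
```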
Pre-fetch architecture can also be used to increase the speed and efficiency with which data is written into a memory bank. For example, each data I/O could include eight data input latches. In a write operation, for each data I/O, eight data bits could then be sequentially entered. Once all of the data input latches contain data, a single internal write function can simultaneously write all latched data bits. For example, in the RAM 100 of FIG. 1, the write circuit 124 could include 128 latches. Eight sets of 16 bits could be sequentially entered into the latches, and then written along the 128 shared data lines into the memory banks.
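The write side can be sketched the same way (hypothetical names again): eight 16-bit sets accumulate in the 128 input latches of a write circuit like 124, and a single internal write then drives all of them onto the shared data lines at once.

```python
PREFETCH = 8
IO_WIDTH = 16


class PrefetchWriteBuffer:
    """Toy model of 128 input latches feeding a single internal write."""

    def __init__(self):
        self.latches = []

    def enter(self, data_set):
        """Latch one 16-bit set presented at the data I/Os."""
        assert len(data_set) == IO_WIDTH
        self.latches.extend(data_set)

    def internal_write(self, write_bank):
        """Write every latched bit to the memory bank in one operation."""
        assert len(self.latches) == PREFETCH * IO_WIDTH
        write_bank(self.latches)
        self.latches = []


buf = PrefetchWriteBuffer()
for _ in range(PREFETCH):
    buf.enter([1] * IO_WIDTH)                  # eight sequential 16-bit entries
buf.internal_write(lambda bits: print(f"internal write of {len(bits)} bits"))
```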
An example of a write operation for one variation of the RAM 100 is illustrated in FIG. 2. FIG. 2 illustrates a conventional “non-posted” write operation followed by a read operation.
