Apparatus and method for device timing compensation

Electrical computers and digital processing systems: support – Synchronization of clock or timing signals, data, or pulses

Reexamination Certificate


Details

Classification: C713S401000, C713S503000
Type: Reexamination Certificate (active)
Patent number: 06226754

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to digital electronic systems. More particularly, this invention relates to techniques for efficiently transferring information in digital electronic systems.
2. Description of the Related Art
In a generalized multi-device digital electronic system, there can be multiple master and slave devices connected by an interconnect structure, as shown in FIG. 1. Wires between the components form the interconnect. Transport of information over the interconnect occurs from transmitter to receiver, where either the master or the slave components can act as transmitter or receiver.
One particularly interesting case is when the slave is a memory device and there is a single master, as shown in FIG. 2. Because of the high occurrence of read operations in typical memory reference traffic, an important case is the transmission of control information from master to slave and the return transmission of read data from slave to master. The round trip delay forms the read latency.
In a pipelined system, total delay to perform an operation is divided into clock cycles by dividing the entire datapath into separate pipe stages. In a pipelined memory system, total read latency is also divided into clock cycles. As operating frequency increases, delay variations from both the interconnect and components are exposed. These delay variations can cause logical device-to-device conflicts which make the operation pipeline less efficient. It is thus desirable to compensate for these timing variations, which can occur depending on the position of the memory parts on the channel and internal delays in the memory devices.
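As a rough, hypothetical illustration (not taken from the patent), the sketch below converts a round-trip read delay into whole clock cycles and shows how a small position-dependent flight-time variation on the channel can cost an extra pipeline cycle; all delay values and function names are assumptions.

```python
import math

def read_latency_cycles(request_ns, core_access_ns, return_ns, clock_ns):
    """Round-trip read latency (control out + core access + data back),
    rounded up to whole clock cycles for a pipelined system."""
    round_trip_ns = request_ns + core_access_ns + return_ns
    return math.ceil(round_trip_ns / clock_ns)

# Hypothetical numbers: 2.5 ns channel flight time each way, 30 ns core access,
# 400 MHz clock (2.5 ns period).
nominal = read_latency_cycles(2.5, 30.0, 2.5, clock_ns=2.5)   # -> 14 cycles
skewed  = read_latency_cycles(3.2, 30.0, 3.2, clock_ns=2.5)   # -> 15 cycles
print(nominal, skewed)  # a 0.7 ns position-dependent delay costs a full cycle
```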
Before discussing the sources of timing variation in a memory system, some background information on the structure and operation of memory cores is provided.
Memory Structure and Operation
In this section memory operations are defined.
FIG. 3 illustrates a memory with a memory core and a memory interface. The memory interface interacts with an interconnect structure. The following discussion expands upon the generic memory elements of FIG. 3 to identify separate structural elements and to discuss the memory operations and memory interactions with the interconnect.
General Memory Core
This subsection illustrates the organization of memory cores into rows and columns and introduces the primitive operations of sense, precharge, read, and write.
A simple memory core typically consists of a storage array, column decoder, row decoder, and sense amplifiers, as shown in FIG. 4. The interface 100 to a memory core generally consists of a row address 101, column address 103, and data path 102. The storage array, shown in FIG. 6, is organized into rows and columns of storage cells, each of which stores one bit of information. Accessing the information in the storage array is a two-step process. First, the information is transferred between the storage array and the sense amplifiers. Second, the information is transferred between the sense amplifiers and the interface via connection 100.
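A minimal software sketch of this two-step access is shown below; the class and parameter names are hypothetical, and the model only mirrors the data movement described above (storage array to sense amplifiers, then sense amplifiers to the interface), not any real core's circuitry or timing.

```python
class SimpleMemoryCore:
    """Toy model of the two-step access: storage array <-> sense amps <-> interface."""

    def __init__(self, rows=8, cols=16):
        self.storage = [[0] * cols for _ in range(rows)]
        self.sense_amps = [0] * cols          # holds one row at a time

    def row_access(self, row):
        """Step 1: transfer a row between the storage array and the sense amplifiers."""
        self.sense_amps = list(self.storage[row])

    def column_read(self, col):
        """Step 2: transfer data between the sense amplifiers and the interface."""
        return self.sense_amps[col]

    def column_write(self, col, value):
        self.sense_amps[col] = value          # write-back to the array is not modeled here
```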
The first major step, transferring information between the storage array and the sense amplifiers, is called a “row access” and is broken down into the minor steps of precharge and sense. The precharge step prepares the sense amplifiers and bit lines for sensing, typically by equilibrating them to a midpoint reference voltage. During the sense operation, the row address is decoded, a single word line is asserted, the contents of the storage cells are placed on the bit lines, and the sense amplifiers amplify the values to a full rail state, completing the movement of the information from the storage array to the sense amplifiers. An important observation is that the sense amps can also serve as a local cache which stores a “page” of data that can be accessed more quickly with column read or write accesses.
The second major step, transferring information between the sense amplifiers and the interface, is called a “column access” and is typically performed in one step. However, variations are possible in which this major step is broken into two minor steps, e.g., by placing a pipeline stage at the output of the column decoder. In this case, the pipeline timing must be adjusted accordingly.
From these two major steps, four primary memory operations result: precharge, sense, read, and write. (Read and write are column access operations.) All memory cores support these four primary operations or some subset of them. As later sections describe, some memory types also require additional operations to support their specific core type.
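The legal ordering of these four operations on a single bank can be illustrated with a small state machine. The following sketch is illustrative only (the names and structure are assumptions, not the patent's); it simply encodes that a row must be sensed before column reads or writes, and precharged before a different row can be sensed.

```python
class BankState:
    """Toy legality checker for precharge, sense, read, and write on one bank."""

    def __init__(self):
        self.open_row = None                  # None means precharged (closed)

    def precharge(self):
        self.open_row = None                  # ready bit lines/sense amps for the next sense

    def sense(self, row):
        assert self.open_row is None, "precharge before sensing a new row"
        self.open_row = row                   # row now held in the sense amplifiers

    def read(self, col):
        assert self.open_row is not None, "column read needs a sensed (open) row"
        # Repeated reads/writes to the open row need no further row access:
        # the sense amplifiers act as a page cache.
        return (self.open_row, col)

    def write(self, col, value):
        assert self.open_row is not None, "column write needs a sensed (open) row"
        return (self.open_row, col, value)
```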
As shown in FIG. 5, memory cores can also have multiple banks, which allow simultaneous row operations within a given core. Multiple banks improve memory performance through increased bank concurrency and reduced bank conflicts. FIG. 5 shows a typical core structure with multiple banks. Each bank has its own storage array and can have its own set of sense amplifiers to allow for independent row operations. The column decoder and datapath are typically shared between banks.
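As an illustrative sketch (again with hypothetical names), a multi-bank core can be modeled as one open-row register per bank behind a shared column path. This is enough to show that row operations in different banks can proceed independently, while column accesses contend for the shared datapath.

```python
class MultiBankCore:
    """Toy multi-bank model: independent row state per bank, shared column path."""

    def __init__(self, num_banks=4):
        self.open_rows = [None] * num_banks   # per-bank sense amplifiers

    def sense(self, bank, row):
        assert self.open_rows[bank] is None, "bank must be precharged first"
        self.open_rows[bank] = row            # does not disturb other banks

    def precharge(self, bank):
        self.open_rows[bank] = None

    def column_read(self, bank, col):
        # All banks funnel through one shared column decoder/datapath,
        # so only one column access uses it at a time in this toy model.
        assert self.open_rows[bank] is not None, "no open row in this bank"
        return (bank, self.open_rows[bank], col)

# Example: rows can be open in two banks at once; accessing a different row
# within one bank would first require a precharge of that bank.
core = MultiBankCore()
core.sense(0, 12)
core.sense(1, 7)       # independent of bank 0
print(core.column_read(0, 3), core.column_read(1, 5))
```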
FIG. 6 shows the generic storage array structure. As shown, the word line (106) accesses a row of storage cells, which in turn transfer the stored data onto the bit lines (107). While the figure shows a pair of bit lines connected to each storage cell, some core organizations may require only one bit line per cell, depending on the memory cell type and sensing circuits.
The general memory core just described provides the basic framework for memory core structure and operations. However, there are a variety of core types, each with slight differences in structure and function. The following three sub-sections describe these differences for each major memory type.
Dynamic RAM (DRAM)
This section describes the structure and primitive operations of the conventional DRAM core. The structure of a conventional DRAM core is shown in FIG. 7. Like the generic memory core in FIG. 4, the conventional DRAM structure has a row and column storage array organization and uses sense amplifiers to perform row access. As a result, the four primary memory operations, sense, precharge, read, and write, are supported. The figure shows an additional “column amplifier” block, which is commonly used to speed column access.
The core interface 100 consists of the following signals: row address 101, column address 103, data I/O bus 106, row control signals 107, and column control signals 108 (the row and column control signals are defined in detail later in this section).
FIG. 8 shows a conventional DRAM core with multiple banks. In this figure, the row decoder, column decoder, and column amplifiers are shared among the banks. Alternative organizations allow these elements to be replicated for each bank, but replication typically requires larger die area and thus greater cost. Cheap core designs with multiple banks therefore typically share row decoders, column decoders, and column datapaths between banks to minimize die area.
Conventional DRAM cores use a single-transistor (1T) cell, in which the single transistor accesses a data value stored on a capacitor, as shown in FIG. 9. This simple storage cell achieves high storage density, and hence a low cost per bit, but has two detrimental side effects. First, access time is relatively slow, because the passive storage capacitor can only store a limited amount of charge; row sensing for conventional DRAM therefore takes longer than for memory types with actively driven cells, such as SRAM. Hence, cheap DRAM cores generally result in slow row access and cycle times. Second, cell refresh is required. Since the bit value is stored on a passive capacitor, leakage current in the capacitor and access transistor degrades the stored value, so the cell value must be “refreshed” periodically. The refresh operation consists of reading the c
