Method of processing memory transactions in a computer...
Electrical computers and digital processing systems: memory – Storage accessing and control – Control technique
Reexamination Certificate
1997-07-02
2001-03-13
Ellis, Kevin L. (Department: 2751)
C711S005000, C710S120000
active
06202133
ABSTRACT:
TECHNICAL FIELD
The present invention relates to computer memory access, and more particularly, to a method of accessing dual memory arrays with dual memory controllers.
BACKGROUND OF THE INVENTION
A computer system relies on memory to store instructions and data that are processed by a computer system processor. Breathtaking advances have been made in both the storage capacity and speed of computer memory devices. However, the speed of memory devices has not been able to keep pace with the speed increases achieved in current microprocessors. As a result, the speed of current computer systems is limited by the speed at which data and instructions can be accessed from the system memory of the computer system.
In a typical computer system, the computer system processor communicates with the computer memory via a processor bus and a memory controller. The computer system memory typically includes a dynamic random access memory (DRAM) module, such as a single in-line memory module (SIMM) or a dual in-line memory module (DIMM). The memory module typically includes one or more banks of memory chips connected in parallel such that each memory bank stores one word of data per memory address.
One reason for delay in typical memory modules is that each memory chip includes one or more data lines that handle both data being written into the memory chip and data being read from the memory chip. Likewise, the memory controller may include a data bus that handles data written to and read from each memory chip. Alternatively, the data bus of the memory chip may be coupled directly to a data bus portion of the processor bus. As a result, each time access to the memory switches from a read to a write or a write to a read, data must go completely through the memory data bus, and possibly the memory controller data bus, before data can be sent through the busses in the opposite direction. The time it takes to wait for the memory bus and possibly the memory controller bus to switch from one direction to the opposite direction is known as bus turn-around time and typically is at least one clock cycle of delay.
In a typical DRAM memory, each memory chip contains an array of memory cells connected to each other by both row and column lines. Each memory cell stores a single bit and is accessed by a memory address that includes a row address that indexes a row of the memory array and a column address that indexes a column of the memory array. Accordingly, each memory address points to the memory cell at the intersection of the row specified by the row address and the column specified by the column address.
In order to limit its size, each memory chip typically includes only enough address pins to specify either the row address or the column address, but not both simultaneously. As a result, the typical memory controller accesses a memory location sequentially, first transmitting the row address and then transmitting the column address. Specifically, the memory controller places the row address on the memory address bus, asserts a row address select (RAS) control signal, then places the column address on the memory address bus and asserts a column address select (CAS) control signal. To ensure proper timing, the memory controller delays briefly after asserting the RAS control signal and before asserting the CAS signal (the RAS/CAS delay).
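For illustration, the following minimal C sketch models the multiplexed row/column addressing described above. The helper routines (drive_address_bus, assert_ras, assert_cas, wait_cycles), the 24-bit address split, and the delay value are assumptions made for this example and are not taken from the patent.

    #include <stdio.h>

    /* Hypothetical stand-ins for memory-controller signal drivers (illustration only). */
    static void drive_address_bus(unsigned v) { printf("address bus <- 0x%03x\n", v); }
    static void assert_ras(void)              { printf("RAS asserted\n"); }
    static void assert_cas(void)              { printf("CAS asserted\n"); }
    static void wait_cycles(unsigned n)       { printf("wait %u cycle(s)\n", n); }

    /* Assume a 24-bit cell address split into a 12-bit row and a 12-bit column. */
    #define COL_BITS 12u
    #define COL_MASK ((1u << COL_BITS) - 1u)

    static void access_cell(unsigned address)
    {
        unsigned row = address >> COL_BITS;  /* upper bits select the row (the "page") */
        unsigned col = address & COL_MASK;   /* lower bits select the column           */

        drive_address_bus(row);
        assert_ras();                        /* latch the row address                  */
        wait_cycles(2);                      /* RAS/CAS delay (illustrative value)     */
        drive_address_bus(col);
        assert_cas();                        /* latch the column address               */
    }

    int main(void)
    {
        access_cell(0x123456);               /* row 0x123, column 0x456 */
        return 0;
    }

In an actual controller these operations are implemented in hardware; the sketch only makes the RAS-then-CAS ordering and the intervening delay explicit.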
Another memory delay, known as pre-charge delay, typically occurs after each memory read. A memory read of a DRAM location is implemented by discharging the memory cell and then completely recharging the memory cell. The pre-charge delay refers to the amount of time that it takes to complete the recharging step.
A technique known as “page mode” has been developed to eliminate the RAS/CAS and pre-charge delays when successive accesses to the same row of memory occur (each row is known as a “page” and typically is four kilobytes (KB)). Because program execution is largely sequential in nature, successive memory accesses very often fall along the same row of memory. When in page mode, a row comparator in the memory controller compares the row address of the memory location currently being accessed with the row address of the next memory access. If the row addresses are the same (known as a “page hit”), the row comparator causes the memory controller to continue asserting the RAS control signal at the end of the current bus cycle. Because the memory already has the correct row address, the new column address can be transferred to the memory immediately, without a RAS/CAS or pre-charge delay. However, if the row addresses of the current and next memory requests differ (known as a “page miss”), the RAS/CAS and pre-charge delays are incurred.
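The page-hit test performed by the row comparator can be sketched in C as follows; the structure, field names, and 12-bit column width are hypothetical and are chosen only to make the comparison concrete.

    #include <stdbool.h>
    #include <stdio.h>

    #define COL_BITS 12u   /* assumed column width; the row address is the rest */

    /* State of the currently open row; field names are hypothetical. */
    struct page_mode_state {
        unsigned open_row;   /* row latched by the currently asserted RAS */
        bool     row_open;   /* true while RAS is held asserted           */
    };

    /* Returns true on a page hit (same row already open), false on a miss. */
    static bool is_page_hit(struct page_mode_state *s, unsigned address)
    {
        unsigned row = address >> COL_BITS;
        if (s->row_open && s->open_row == row)
            return true;                 /* page hit: keep RAS asserted, issue CAS only */
        s->open_row = row;               /* page miss: remember the newly opened row    */
        s->row_open = true;
        return false;
    }

    int main(void)
    {
        struct page_mode_state s = { 0, false };
        unsigned accesses[] = { 0x123000, 0x123400, 0x124000 };
        for (unsigned i = 0; i < sizeof accesses / sizeof accesses[0]; ++i)
            printf("access 0x%06x: %s\n", accesses[i],
                   is_page_hit(&s, accesses[i]) ? "page hit" : "page miss");
        return 0;
    }

On a hit, only a new CAS cycle is required; on a miss, the open row must be closed and a new RAS issued, incurring the pre-charge and RAS/CAS delays described above.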
The number of pre-charge delays also can be reduced by splitting the system memory into two memory banks and interleaving the memory locations between the two banks. Interleaving refers to reading or writing consecutive data in alternate memory banks, for example by storing all even-addressed data in the first memory bank and all odd-addressed data in the second memory bank. When an interleaved system memory is used for consecutive reads of consecutively addressed data items, the second data item can be read from the second memory bank while the first memory bank is being pre-charged after the read of the first data item. As a result, the pre-charge delay is hidden each time a data item is accessed from a memory bank different from the one from which the previous data item was accessed.
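A minimal sketch of the two-way interleave just described, assuming (for this example only) that the bank is selected by the low-order bit of the word address:

    #include <stdio.h>

    /* Bank selection for a two-way interleave: the low-order bit of the word
       address picks the bank, so consecutive words alternate between banks. */
    static unsigned bank_of(unsigned word_address)
    {
        return word_address & 1u;   /* even words -> bank 0, odd words -> bank 1 */
    }

    int main(void)
    {
        for (unsigned addr = 0; addr < 6; ++addr)
            printf("word %u -> bank %u\n", addr, bank_of(addr));
        return 0;
    }

Because consecutive word addresses map to alternate banks, the pre-charge of one bank can overlap the access to the other.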
Although the procedures discussed above reduce the number of delays in accessing data, typical computer memory is still relatively slow compared to the computer system processor bus. For example, a standard type of memory known as Extended Data Out (EDO) memory typically operates at half the rate of the processor bus. Moreover, the delays discussed above still occur on a regular basis. In particular, read/write delays still occur when switching from a read to a write or vice versa, RAS/CAS delays still occur when the memory row being accessed changes, and pre-charge delays still occur for consecutive reads to the same memory bank. The situation is made worse when one or more additional memory requesters submit memory requests at the same time as the computer system processor. Prior art memory controllers simply employ a rotational priority scheme in which the particular memory requester enabled to submit a memory request is switched after each memory request. Such a rotational priority scheme reduces the chance of receiving consecutive read or write requests to the same memory row, and it increases both the number of switches between reads and writes and the number of consecutive requests to the same memory bank, thereby increasing the number of memory access delays.
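What the passage calls a rotational priority scheme is, in essence, round-robin arbitration. The sketch below is a generic illustration of such a scheme, with hypothetical names; it is not taken from any particular prior art controller.

    #include <stdio.h>

    #define NUM_REQUESTERS 4

    /* Rotational (round-robin) priority: after each grant, priority moves to
       the next requester in the rotation.  Names are illustrative. */
    struct arbiter {
        unsigned next;   /* requester that gets first consideration next time */
    };

    /* pending[i] is nonzero if requester i has a memory request waiting.
       Returns the index of the granted requester, or -1 if none is pending. */
    static int grant(struct arbiter *a, const int pending[NUM_REQUESTERS])
    {
        for (unsigned i = 0; i < NUM_REQUESTERS; ++i) {
            unsigned candidate = (a->next + i) % NUM_REQUESTERS;
            if (pending[candidate]) {
                a->next = (candidate + 1) % NUM_REQUESTERS;  /* rotate priority */
                return (int)candidate;
            }
        }
        return -1;
    }

    int main(void)
    {
        struct arbiter a = { 0 };
        int pending[NUM_REQUESTERS] = { 1, 0, 1, 1 };
        for (int n = 0; n < 3; ++n)
            printf("grant -> requester %d\n", grant(&a, pending));
        return 0;
    }

Because the granted requester changes after every request regardless of which row or bank the pending requests target, row locality is frequently broken, which is the drawback the passage identifies.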
SUMMARY OF THE INVENTION
An embodiment of the present invention is directed to a method of operating a computer system having first and second random access memory (RAM) modules for storing digital information. The computer system also includes first and second system controllers coupled to the first and second RAM modules, respectively. The first system controller has a first address decoder that allocates to the first RAM module a first set of addresses. The second system controller has a second address decoder that allocates to the second RAM module a second set of addresses, having no addresses in common with the first set of addresses. By employing two system controllers to control two RAM modules, two memory transactions can be executed simultaneously to eliminate or reduce the number of read/write, RAS/CAS, and pre-charge delays incurred.
In one embodiment of the method, the first set of addresses is interleaved with the second set of addresses such that an address block of the second set is immediately preceded and immediately followed by address blocks of the first set. The size of the address blocks being interleaved can be a page, a cache line, a word, or any other size smaller than the sizes of the first and second sets of addresses.
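As a rough illustration of the address decoding implied by this summary, the sketch below interleaves page-sized blocks between the two controllers, with even-numbered blocks assigned to the first controller and odd-numbered blocks to the second. The block size and the even/odd decode rule are assumptions for the example, not details taken from the patent.

    #include <stdio.h>

    #define BLOCK_SIZE 4096u   /* assumed interleave granularity: one page */

    /* Returns 0 if the address is allocated to the first controller/RAM module,
       1 if it is allocated to the second (assumed even/odd block rule). */
    static unsigned owning_controller(unsigned address)
    {
        return (address / BLOCK_SIZE) & 1u;   /* even blocks -> 0, odd blocks -> 1 */
    }

    int main(void)
    {
        unsigned samples[] = { 0x0000, 0x1000, 0x2000, 0x3000 };
        for (unsigned i = 0; i < sizeof samples / sizeof samples[0]; ++i)
            printf("address 0x%04x -> controller %u\n",
                   samples[i], owning_controller(samples[i]));
        return 0;
    }

Because the two sets of addresses are disjoint, a request decoded to the first controller and a request decoded to the second can proceed at the same time, which is how two memory transactions can be executed simultaneously as described above.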
Dorsey & Whitney LLP
Ellis Kevin L.
Micron Technology, Inc.