System for minimizing memory bank conflicts in a computer...

Electrical computers and digital processing systems: memory – Storage accessing and control – Control technique

Reexamination Certificate


Details

C711S005000, C711S105000


active

06622225

ABSTRACT:

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not applicable.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to a computer system that includes one or more dynamic random access memory (“DRAM”) devices for storing data. More particularly, the invention relates to a computer system with DRAM devices in which multiple banks of storage can be accessed simultaneously to enhance the performance of the memory devices. Still more particularly, the present invention relates to a system that effectively minimizes simultaneous accesses to the same bank of memory to avoid access delays.
2. Background of the Invention
Almost all computer systems include a processor and a system memory. The system memory functions as the working memory of the computer system, where data is stored that has been or will be used by the processor and other system components. The system memory typically includes banks of dynamic random access memory (“DRAM”) circuits. According to normal convention, a memory controller interfaces the processor to a memory bus that connects electrically to the DRAM circuits. While DRAM circuits have become increasingly fast, the speed of memory systems typically lags behind the speed of the processor. Because of the large quantity of data that is stored in the system memory, the system memory may at times be a bottleneck that slows down the performance of the computer system. Because of this disparity in speed, in most computer systems the processor must wait for data to be stored (“written”) to and retrieved (“read”) from DRAM memory. The more wait states that a processor encounters, the slower the performance of the computer system.
Data generally is transferred between DRAM and other system components (such as the processor) in two steps. First, the accessing component causes signals to be generated on the memory address bus representing the row address of the desired memory location, which is latched into the DRAM when the row address strobe (“RAS”) signal is asserted low. On the next, or on a subsequent, clock cycle, the memory device latches in the column address signal when the column address strobe (“CAS”) is asserted low. During a write transaction, data typically is written into memory on the falling edge of the CAS signal, when the write enable (“WE”) signal is active. In a read cycle, data from the selected memory cell is driven onto the data out lines shortly after the assertion of the CAS signal while the write enable (“WE”) is inactive.
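The two-step addressing described above can be sketched as a simple address split. The bit widths below are illustrative assumptions for a hypothetical DRAM geometry, not values taken from this patent:

```python
# Sketch of two-step DRAM addressing: the row portion of a linear
# address is strobed first (RAS), the column portion second (CAS).
# ROW_BITS and COL_BITS are assumed, illustrative widths.
ROW_BITS = 12   # 4096 rows per bank (assumption)
COL_BITS = 10   # 1024 columns per row (assumption)

def split_address(addr: int) -> tuple[int, int]:
    """Split a linear address into (row, column) strobe values."""
    column = addr & ((1 << COL_BITS) - 1)            # latched on CAS
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)  # latched on RAS
    return row, column
```

In a real controller the split also extracts a bank index, and the widths depend on the specific DRAM devices installed.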
The speed of memory circuits typically is based on two timing parameters. The first parameter is memory access time, which is the minimum time required by the memory circuit to set up a memory address and produce or capture data on or from the data bus. The second parameter is the memory cycle time, which is the minimum time required between two consecutive accesses to the memory circuit. The extra time required for consecutive memory accesses in a DRAM circuit is necessary because the internal memory circuits require additional time to recharge (or “precharge”) to accurately produce data signals.
Because DRAM circuits typically operate slower than the processor and other system components, most computer systems provide certain high-speed access modes for DRAM circuits. An example of a prior art high-speed access mode is the page mode. The page mode enables faster memory operations by allowing successive memory accesses to the same page of memory to occur, because the row address need not be re-loaded, and thus all that is required for the subsequent memory access is to strobe the next column addresses to the DRAM. Thus, the time required to set up (or precharge) and strobe the row address for the same memory page is eliminated.
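The page-mode optimization described above amounts to checking whether a new access targets the row already open in a bank. The following is a minimal sketch under the assumption of one open row per bank; the names and return values are illustrative, not from the patent:

```python
# Page-mode sketch: an access to the row already open in a bank is a
# "page hit" and needs only a column strobe (CAS); otherwise the bank
# must precharge and strobe the new row (RAS) before the CAS.
open_rows: dict[int, int] = {}  # bank -> currently open row

def access(bank: int, row: int) -> str:
    if open_rows.get(bank) == row:
        return "page-hit"        # CAS only; row setup skipped
    open_rows[bank] = row        # precharge + RAS, then CAS
    return "page-miss"
```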
In addition, the assignee of the present invention has developed a memory access technique which permits certain memory operations to be pipelined, thus allowing certain memory operations to be performed in parallel. For example, and as set forth in more detail in certain of the co-pending applications filed concurrently with this application and mentioned in the related applications section, multiple memory accesses may be ongoing at the same time: one bank of memory may be precharged, while another memory bank is latching a row address, and a third memory bank is latching a column address. In this system, therefore, multiple memory operations may be performed in parallel to different memory banks in the system memory. A problem arises, however, if two memory accesses are made to the same memory bank, but not the same page. When a processor or other component attempts to access a memory bank that is already the subject of a memory access, a bank conflict occurs. A bank conflict degrades performance in a pipelined memory system, because a transaction to a memory bank that is already being accessed cannot be completed until the first transaction is completed. Thus, if a bank conflict arises, the memory accesses must be processed serially, and the advantages of the parallel memory system are lost while the bank conflict is resolved. Consequently, the typical approach is to compare new memory requests with the DRAM memory bank state to identify bank conflicts and to stall the new conflicting request until the first memory transaction is fully completed.
Memory systems with a large number of memory banks present an opportunity for increased parallelism. With increased parallelism of the memory system comes the need to track the use of more memory banks simultaneously. In particular, it is advantageous to track the new memory requests to determine if any request targets a memory bank that already is the target of a current memory transaction, or a transaction that has been scheduled for execution. In the event a new memory request results in a bank conflict with a scheduled or executing memory transaction, the memory controller can theoretically re-order the newly requested transactions to achieve a greater efficiency. Implementing such a system can, however, be extremely complex. Parallel memory systems may have numerous memory transactions queued, waiting to be executed. In addition, new memory requests may also be entered in a pending request queue while they wait to be placed in the memory transaction queue. Thus, to identify potential bank conflicts, it is necessary to compare all of the entries in the pending request queue with the DRAM memory bank state. An optimal implementation of this comparison (of multiple queue entries with multiple queue entries) can require a substantial amount of circuitry.
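The comparison the preceding paragraph describes, checking every pending request against the set of banks with in-flight or scheduled transactions, can be sketched as follows. The request and queue shapes are assumptions for illustration:

```python
# Sketch of bank-conflict identification: each pending request is
# compared against the banks occupied by scheduled or executing
# transactions. Dict-based requests are an illustrative assumption.
def find_conflicts(pending: list[dict], busy_banks: set[int]) -> list[int]:
    """Return indices of pending requests that target a busy bank."""
    return [i for i, req in enumerate(pending) if req["bank"] in busy_banks]
```

In hardware this is the many-to-many comparator array the paragraph notes can require substantial circuitry: every pending-queue entry is compared against every tracked bank in parallel.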
It would be advantageous if a simple technique could be used to compare pending memory requests with DRAM memory transactions in order to identify bank conflicts. It would also be advantageous if any such conflicting transactions could be re-ordered to avoid the bank conflicts, while continuing to process other non-conflicting transactions without delaying operation of the DRAM. Despite the apparent performance advantages of such a system, to date no such system has been implemented.
SUMMARY OF THE INVENTION
The problems noted above are solved in large part by the system and techniques of the present invention, which avoids delays resulting from bank conflicts using a system that compares a pending memory request on each clock cycle with all entries in a DRAM transaction queue. When a bank conflict is detected, the memory controller rejects the new conflicting transaction and does not transfer it to the DRAM transaction queue. On each subsequent clock cycle, the next pending memory request is similarly examined. Requests that do not produce a bank conflict are loaded into the DRAM transaction queue for execution, while those that produce a bank conflict are recycled in the pending request queue.
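The per-clock scheme summarized above can be sketched as follows: on each cycle, the head of the pending-request queue is compared against every entry in the transaction queue, and conflicting requests are recycled to the back of the pending queue. The queue and request representations are illustrative assumptions, not the patent's actual structures:

```python
from collections import deque

# Sketch of the summarized scheme: one pending request is examined per
# "clock"; a bank conflict recycles it, otherwise it is scheduled.
def clock_tick(pending: deque, txn_queue: list) -> None:
    if not pending:
        return
    req = pending.popleft()
    if any(txn["bank"] == req["bank"] for txn in txn_queue):
        pending.append(req)      # bank conflict: recycle, retry later
    else:
        txn_queue.append(req)    # no conflict: load for execution
```

Because only non-conflicting requests enter the transaction queue, other transactions continue to be processed while a conflicting request waits its turn again.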
According to the preferred embodiment, a pending request queue stores requests that have been sent by the processor or other system component, but which have not yet been formatted for the memory and stored in the DRAM transaction queue prior

Profile ID: LFUS-PAI-O-3021433