Memory controller using queue look-ahead to reduce memory...

Electrical computers and digital processing systems: memory – Storage accessing and control – Access timing

Reexamination Certificate


C711S105000, C711S136000, C711S137000, C711S154000, C711S158000, C711S163000, C711S168000, C711S169000, C713S324000


active

06269433

ABSTRACT:

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not Applicable.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a computer system and, more particularly, to a memory controller in a computer used to permit access to memory via a memory bus. Still more particularly, the invention relates to a memory controller that provides increased bandwidth on the memory bus.
2. Background of the Invention
Personal computers ("PC's") include a variety of interconnected components between which data and control information passes. Typically, a computer includes a microprocessor, a non-removable storage device such as a hard disk drive, a graphics interface to a video monitor, and other components permitting an operator to perform a variety of activities such as word processing, spreadsheet calculations, video games, etc.
The processor typically accesses data and/or software stored on a mass storage device. A typical microprocessor, however, is capable of receiving data or software from or providing data or software to a mass storage device much faster than the storage device is capable of providing or receiving the corresponding information. Often, a processor must access the same piece of data or the same software instruction multiple times. It thus is beneficial to expedite transfers of information to and from the processor.
To increase the speed at which the processor accesses and uses information (including data and/or software) stored on a storage device, PC's include random access memory ("RAM"). The computer's RAM generally comprises the computer's main working memory and includes one or more memory "chips," typically arranged in an array. A processor can access RAM much faster than it can access a mass storage device, such as a hard drive. The PC's main RAM memory functions as temporary storage for programs executed by the processor and data used by the processor. When the operator of the PC wishes to run a program stored on the hard disk drive, a copy of the requested program typically is transferred to the computer's main memory. Although the copy of the program is transferred from the hard disk to main RAM memory at the relatively slow transfer rate dictated by the hard disk, the processor can then retrieve each program instruction from main memory much faster than from the hard disk. In addition to the programs, a copy of any applicable data also is retrieved and placed in main RAM memory so that the processor can more rapidly access the data. The main RAM memory, however, is volatile, meaning that once power is turned off to the memory chips, which occurs when the computer is turned off, the memory contents are erased.
Improvements in computer system performance usually require an evolution of both software and hardware. Thus, software and hardware development are interrelated: software designers continue to develop more sophisticated software that takes advantage of faster computers, and computer designers continue to develop computers that operate faster to run newer, more sophisticated software. It is therefore desirable for a computer designer to improve the speed at which the computer operates. The computer's operational speed is determined by a number of factors, including the speed at which main RAM memory is accessed by a processor or other device needing memory access. Increasing memory access speed (or, alternatively stated, reducing memory access time) contributes to increasing the overall speed at which a computer performs desired tasks.
Computer industry participants have approached the problem of increasing the speed of memory access from two basic angles. First, DRAM manufacturers continually strive to produce faster memory chips. Whereas the access time of memory chips in the early 1990's was greater than 100 nanoseconds, today the access time is on the order of 60 nanoseconds. Future memory chips undoubtedly will be even faster. The second approach is to develop faster techniques through which the computer communicates with memory. The present invention focuses on the latter of these two approaches. The following brief description of a memory subsystem in a typical PC may help to fully understand and appreciate the advantages of the present invention.
A personal computer usually includes a memory controller, which may be a discrete chip or part of another chip, that controls access to the computer's main RAM memory. The memory controller couples to the RAM by way of a memory bus, which generally comprises a plurality of digital data, address, and control lines. Accessing DRAM is generally a multi-step process performed by the memory controller. First, the memory controller "opens" an appropriate "bank" of memory and then opens an appropriate "page" within the bank. Once the desired page of memory is opened, the memory controller can access the desired byte or bytes of data within the page. The memory controller may store new data in place of the existing data in a step referred to as a "write cycle." Alternatively, the memory controller may read data from the memory in a step referred to as a "read cycle." After a read or write cycle, the memory controller then may "close" the page and bank of memory in preparation for the next read or write cycle.
One type of DRAM memory commonly used is synchronous dynamic random access memory ("SDRAM"). Unlike conventional DRAM, synchronous DRAM uses a clock signal (a signal whose voltage repeatedly oscillates between two voltage levels) provided by the computer to control (or synchronize) the SDRAM's internal timing. Synchronous DRAM offers several advantages over conventional DRAM, which does not run off of a clock signal: SDRAM generally is faster, offers improved testability and higher yields, and consumes less power than conventional DRAM. Like conventional DRAM, accessing SDRAM involves multiple steps initiated by well-known commands such as "Activate," "read/write," and "precharge." An "Activate" command opens, or "activates," the desired bank and page of memory. A "read/write" command enables the memory controller to read data from or write data to the SDRAM. The bank and page opened by the "Activate" command can be closed by issuing a "precharge" command. The memory controller issues the Activate, read/write, and precharge commands to the SDRAM.
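The Activate, read/write, and precharge sequence described above can be sketched as a small simulation. All names here (such as SimpleSDRAMController) are hypothetical illustrations; a real memory controller is a hardware state machine, not software:

```python
# Minimal sketch of the SDRAM command sequence described above.
# Names and structure are illustrative, not taken from the patent.

class SimpleSDRAMController:
    """Tracks which page is open in each bank and issues commands in order."""

    def __init__(self):
        self.open_page = {}   # bank -> currently open page (absent if closed)
        self.commands = []    # log of issued commands, for inspection

    def activate(self, bank, page):
        """Open ('activate') a bank and a page within that bank."""
        self.commands.append(("ACTIVATE", bank, page))
        self.open_page[bank] = page

    def read_write(self, bank, op):
        """Issue a read or write to the currently open page of a bank."""
        assert bank in self.open_page, "bank must be activated first"
        self.commands.append((op.upper(), bank, self.open_page[bank]))

    def precharge(self, bank):
        """Close ('precharge') the open page in a bank."""
        self.commands.append(("PRECHARGE", bank))
        self.open_page.pop(bank, None)

    def access(self, bank, page, op):
        """One traditional, fully serialized access cycle."""
        self.activate(bank, page)
        self.read_write(bank, op)
        self.precharge(bank)


ctrl = SimpleSDRAMController()
ctrl.access(bank=0, page=3, op="write")
print(ctrl.commands)
# [('ACTIVATE', 0, 3), ('WRITE', 0, 3), ('PRECHARGE', 0)]
```

Note that `access` serializes the three commands, which is exactly the traditional behavior the patent's background goes on to criticize.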
Traditionally, a memory controller only opens one page of memory in a bank at a time. Thus, if a current memory request, be it a read or write cycle, is to page x in a bank and the next pending memory request is to page y in the same bank, page x first is closed, or precharged, before the next memory cycle to page y is started.
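The one-open-page-per-bank policy just described can be expressed as a small decision function. The helper name and return labels below are invented for illustration; the patent does not define them:

```python
# Sketch of the traditional one-open-page-per-bank policy described above.
# A request to a bank whose open page differs must wait for a precharge
# before the new page can be activated.

def classify_request(open_pages, bank, page):
    """Return 'page-hit', 'page-conflict', or 'page-empty' for a request."""
    if bank not in open_pages:
        return "page-empty"     # bank closed: activate, then read/write
    if open_pages[bank] == page:
        return "page-hit"       # page already open: read/write immediately
    return "page-conflict"      # wrong page open: precharge, activate, read/write

open_pages = {0: 7}  # bank 0 currently has page 7 open
print(classify_request(open_pages, 0, 7))   # page-hit
print(classify_request(open_pages, 0, 9))   # page-conflict
print(classify_request(open_pages, 1, 2))   # page-empty
```

In the patent's terminology, the "page x then page y in the same bank" scenario is the page-conflict case, which forces the precharge-before-activate serialization.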
FIG. 1 illustrates this process, in which two memory write cycles, cycles A and B, are shown in a timeline. Write cycle A begins with the memory controller issuing an Activate command to activate the memory bank and page where the write data of cycle A is to be stored. The data then is provided to the SDRAM when the memory controller issues a write command. Assuming the data pertaining to the next write cycle, cycle B, is destined for a page or bank different from that of write cycle A, the traditional memory controller precharges the bank associated with write cycle A before starting the Activate command for write cycle B. This process results in a period of time, indicated by reference number 20, between the write commands of write cycles A and B in which no data is being transferred on the memory bus between the memory controller and the SDRAM. Period of time 20 represents "dead time" because the memory bus is not used to transfer data between the memory controller and main memory. It is desirable to minimize, if not eliminate, the dead time 20 on a memory bus, because doing so maximizes the percentage of time during which data is transmitted across the bus.
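The cost of dead time can be made concrete with back-of-the-envelope arithmetic. The cycle counts below are invented for illustration and are not taken from FIG. 1:

```python
# Illustrative bus-utilization arithmetic for serialized write cycles.
# Cycle counts are made up; real timings depend on the specific SDRAM part.

def bus_utilization(data_cycles, dead_cycles):
    """Fraction of bus clocks that actually carry data."""
    return data_cycles / (data_cycles + dead_cycles)

# Two back-to-back write bursts of 4 data clocks each, separated by a
# precharge + activate gap (the "dead time") of 6 clocks:
serialized = bus_utilization(data_cycles=8, dead_cycles=6)

# If the controller could overlap most of the precharge/activate with the
# first burst's data phase, the gap shrinks and utilization rises:
overlapped = bus_utilization(data_cycles=8, dead_cycles=1)

print(round(serialized, 3), round(overlapped, 3))
# 0.571 0.889
```

This is the motivation for the queue look-ahead approach named in the title: knowing where the next queued request is headed lets the controller start closing and opening banks while data is still moving.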
It would thus be advantageous to design a memory controller for a computer system that maxim
