Apparatus and method for enhancing data transfer to or from...

Electrical computers and digital processing systems: support – Synchronization of clock or timing signals, data, or pulses

Reexamination Certificate


Details

Class: C713S600000
Type: Reexamination Certificate
Status: active
Patent number: 06226755

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to computer systems and, more particularly, to a system and method for maximizing data transfer efficiency across a memory bus employing synchronous dynamic random access memory (“SDRAM”).
2. Description of the Related Art
A dynamic random access memory (“DRAM”) device is generally well known as containing an array of memory elements organized within rows and columns. A read or write access begins by transmitting a row address on the address bus and asserting the RAS control signal to latch the address to an appropriate row within the DRAM. The RAS control signal must be maintained while the column address is transmitted on the multiplexed address bus and the CAS control signal is asserted. Strobing the CAS control signal selects the desired data word to be sensed from the addressed row. In the case of a read access, this word is then transmitted back to the processor or direct memory access (“DMA”) device via a memory controller. In the case of a write access, the information on the data bus is written into the column amplifiers and the modified row is restored within the memory array.
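The access sequence just described can be sketched in C as follows. The bus-level helpers (set_address, assert_ras, assert_cas, sample_data_bus, and so on) are hypothetical stand-ins for signals a real memory controller drives in hardware:

#include <stdint.h>
#include <stdbool.h>

static uint16_t address_bus;              /* multiplexed row/column address lines */
static bool     ras_asserted, cas_asserted;

static void set_address(uint16_t a)   { address_bus = a; }
static void assert_ras(void)          { ras_asserted = true; }
static void deassert_ras(void)        { ras_asserted = false; }
static void assert_cas(void)          { cas_asserted = true; }
static void deassert_cas(void)        { cas_asserted = false; }
static uint64_t sample_data_bus(void) { return 0; /* placeholder for the sensed word */ }

/* Read one word: latch the row with RAS, then strobe CAS for the column. */
uint64_t dram_read(uint16_t row, uint16_t col)
{
    set_address(row);
    assert_ras();                  /* RAS stays asserted for the whole access */

    set_address(col);              /* column address on the same multiplexed bus */
    assert_cas();                  /* strobing CAS selects the word in the open row */
    uint64_t word = sample_data_bus();
    deassert_cas();

    deassert_ras();                /* releasing RAS lets the row pre-charge */
    return word;
}

int main(void)
{
    (void)dram_read(0x1a3, 0x27);  /* example row and column, chosen arbitrarily */
    return 0;
}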
An important requirement of DRAM technology is that the RAS control signal must be maintained during the time in which access is desired. If a burst of data is to be read, the amount of time for which the RAS control signal can remain asserted is limited by the need to periodically pre-charge the row being read. An improved memory architecture, known as synchronous DRAM (“SDRAM”), minimizes the need to maintain assertion of the RAS control signal during a read access. SDRAM requires the RAS control signal to be pulsed only for the set-up and hold times relative to a particular clock edge. The row being addressed by the pulsed RAS control signal remains active until a deactivate command is given. In addition to the benefit of merely strobing the RAS control signal and therefore maximizing pre-charge time, the SDRAM architecture beneficially synchronizes its commands to the memory bus clock, which is derived as a ratio of the processor clock. Thus, an SDRAM operation is said to be synchronized with the clock which operates the processor or central processing unit (“CPU”).
The various commands, such as RAS, CAS, activate, and deactivate, are given on a rising edge of the system clock. The deactivate command initiates the pre-charge operation of the previously accessed row, whereby voltage upon the memory elements being read is restored from the latched values on the corresponding column amplifiers. The pre-charge operation thereby serves to restore the charge disturbed on the storage capacitors during a sense operation. Over time, however, the pre-charge operation will not be sufficient to replace charge which “leaks” from the storage capacitors. The storage cells and, more particularly, the capacitors within the respective cells must therefore be periodically refreshed. Typically there is a maximum time within which every cell must be read and then written back (i.e., refreshed) to guarantee proper data retention. As such, the refresh operation of an SDRAM must be monitored so that each cell is timely recharged, either through a self-refresh technique or a CAS-before-RAS (“CBR”) refresh technique, for example. Regardless of the refresh technique chosen, the refresh mechanism generally employs a counter which moves through the rows of the array during each count cycle, restoring charge from the corresponding column amplifiers in its wake. Because each SDRAM integrated circuit or chip is typically divided into internal, or logical, banks, the read and write operations associated with each row refresh must be performed on each bank.
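A minimal sketch of such a counter-driven refresh walk is shown below; the bank and row counts, and the refresh_row_in_bank stand-in, are assumptions chosen purely for illustration:

#include <stdio.h>

#define NUM_BANKS 2u       /* assumed number of internal (logical) banks */
#define NUM_ROWS  4096u    /* assumed rows per bank */

static unsigned refresh_counter;    /* row currently being refreshed */

/* Stand-in for the internal read and write-back of one row in one bank. */
static void refresh_row_in_bank(unsigned bank, unsigned row)
{
    printf("refresh bank %u, row %u\n", bank, row);
}

/* One refresh step: every bank refreshes the row named by the counter,
 * then the counter advances so that all rows are covered within the
 * maximum retention interval. */
static void refresh_tick(void)
{
    for (unsigned bank = 0; bank < NUM_BANKS; bank++)
        refresh_row_in_bank(bank, refresh_counter);
    refresh_counter = (refresh_counter + 1u) % NUM_ROWS;
}

int main(void)
{
    for (int i = 0; i < 3; i++)     /* three refresh steps as a demonstration */
        refresh_tick();
    return 0;
}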
All data for the SDRAM device is generally written or read in burst fashion. Given a row address and an initial column address, the SDRAM internally accesses a sequence of locations beginning at the initial column address and proceeding to succeeding column addresses depending on the programmed length of the burst sequence. The sequence can be programmed to follow either a serial-burst sequence within one of the internal banks or an interleaved-burst sequence among the pair of internal banks. Conventional SDRAMs can be programmed to read or write a burst of one, two, four, eight, or more bits.
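One possible reading of the two burst orderings, sketched in C; the exact access pattern, in particular how an interleaved burst alternates between the two internal banks, is assumed for illustration rather than taken from the specification:

#include <stdio.h>

/* Serial burst: successive columns within a single internal bank. */
static void serial_burst(unsigned bank, unsigned start_col, unsigned burst_len)
{
    for (unsigned i = 0; i < burst_len; i++)
        printf("access bank %u, column %u\n", bank, start_col + i);
}

/* Interleaved burst: alternate between the two internal banks while
 * stepping through the column addresses. */
static void interleaved_burst(unsigned start_col, unsigned burst_len)
{
    for (unsigned i = 0; i < burst_len; i++)
        printf("access bank %u, column %u\n", i & 1u, start_col + i / 2);
}

int main(void)
{
    serial_burst(0, 8, 4);      /* e.g. columns 8, 9, 10, 11 in bank 0 */
    interleaved_burst(8, 4);    /* e.g. banks 0 and 1 alternating from column 8 */
    return 0;
}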
Reading N bits of data during a read burst operation beneficially reduces the control signal overhead needed to transfer that burst of data across the system memory bus and to the memory requester. An additional benefit is gained if several SDRAM chips can be grouped together and selected by a common chip select signal. For example, the SDRAM system can be partitioned, and each partition may contain at least one SDRAM chip. If multiple SDRAMs are implemented in a single partition, those chips can be interconnected on a printed circuit board, often denoted a memory module, DIMM, or SIMM. The chip select signal is thereby used to select a single partition among many partitions, each partition containing one or more SDRAM chips.
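A sketch of how the upper physical address bits might be decoded into such a shared chip select; the partition size (PARTITION_SHIFT) and the one-hot encoding are assumptions:

#include <stdint.h>
#include <stdio.h>

#define PARTITION_SHIFT 26u    /* assumed 64 MB per partition */

/* Derive a one-hot chip select mask from the upper address bits. Every
 * SDRAM chip on the selected DIMM/SIMM shares the resulting chip select. */
static uint32_t partition_chip_select(uint32_t phys_addr, unsigned num_partitions)
{
    unsigned partition = (phys_addr >> PARTITION_SHIFT) % num_partitions;
    return 1u << partition;
}

int main(void)
{
    /* Address 0x0C000000 with four partitions selects partition 3 (mask 0x8). */
    printf("chip select mask: 0x%x\n", partition_chip_select(0x0C000000u, 4));
    return 0;
}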
A benefit would be gained if the SDRAM system employs a relatively wide memory bus, possibly one which can transfer an entire 256-bit cache line during four system clock cycles, assuming a 64-bit-wide transfer in each clock cycle. It is further contemplated that an entire cache line could be transferred in a single cycle if the memory bus were to exceed 64 bits in width, i.e., if the memory bus were an entire cache line in width. Such a bus transfer architecture could be achieved by configuring each partition with a DIMM. In this fashion, a partition could transfer four quadwords from, for example, 64-data-pin DIMMs during each clocking cycle of the system clock. Transferring an entire cache line would prove especially useful when fetching a cache line of data to the processor cache, or when performing cache-line direct memory access (“DMA”) transfers.
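The cycle count follows directly from the bus width, as the small calculation below illustrates for the 256-bit cache line and 64-bit bus mentioned above:

#include <stdio.h>

int main(void)
{
    const unsigned cache_line_bits = 256;   /* one cache line */
    const unsigned bus_width_bits  = 64;    /* one quadword per clock cycle */

    /* 256 / 64 = 4 system clock cycles per cache line on a 64-bit bus;
     * a bus as wide as the cache line would need only one cycle. */
    printf("cycles per cache line: %u\n", cache_line_bits / bus_width_bits);
    return 0;
}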
However, to take full advantage of any transfer efficiency upon the memory bus, the pre-charge and refresh operations must be accounted for and, more particularly, must be “hidden” from cycles present on the memory bus. An improved transfer technique must therefore be employed to ensure data transfers are not broken whenever a portion of the SDRAM system is being pre-charged or refreshed. In this manner, the cache lines of data are seamlessly forwarded on each clocking signal cycle for optimum memory bus bandwidth.
SUMMARY OF THE INVENTION
The problems outlined above are in large part solved by an improved memory bus transfer technique hereof. The present transfer technique is one which can perform three or more consecutive and unbroken fetches of cache line data from the memory to a requesting device. The requesting device can be either the processor or a peripheral performing access through a DMA cycle. The three consecutive fetches incur a data transfer that occupies no more than 12 system clock cycles, given a burst length of four cycles per burst.
The fetched data arise from partitions within the SDRAM system. Each partition includes a minimum of one SDRAM chip. If an entire cache line is to be transferred within a single system clock cycle, then the partition being read includes a plurality of SDRAM chips commonly connected through a chip select signal. The partitioned group of chips can be arranged on a separate printed circuit board, such as that attributed to a DIMM.
During the time in which one partition is being read (i.e., data being transferred therefrom), another partition may undergo a refresh or pre-charge operation. However, the refresh or pre-charge operations which occur within any given partition do not occur consecutively between read requests attributed to another partition or another pair of partitions. In other words, the present transfer technique ensures that the refresh and pre-charge operations of one partition are separated in time by a read request attributed to another partition. In the example in which three partitions are used, the present transfer technique purposely interposes a read request to a second partition between successive refresh or pre-charge operations directed to a first partition.
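The resulting schedule can be illustrated with a small sketch, assuming three partitions and a burst of four clock cycles per cache-line fetch; while one partition streams data on the bus, a different partition is free to pre-charge or refresh:

#include <stdio.h>

int main(void)
{
    const unsigned burst_len  = 4;    /* clock cycles per cache-line fetch */
    const unsigned partitions = 3;

    for (unsigned fetch = 0; fetch < partitions; fetch++) {
        unsigned reading = fetch;                     /* partition on the bus */
        unsigned hidden  = (fetch + 1) % partitions;  /* partition maintained meanwhile */
        for (unsigned c = 0; c < burst_len; c++)
            printf("cycle %2u: data from partition %u; "
                   "partition %u may pre-charge or refresh\n",
                   fetch * burst_len + c, reading, hidden);
    }
    /* Three consecutive fetches occupy 3 * 4 = 12 system clock cycles,
     * with data on the memory bus during every cycle. */
    return 0;
}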
