Pipelined SDRAM memory controller to optimize bus utilization

Electrical computers and digital processing systems: memory – Storage accessing and control – Control technique

Reexamination Certificate


Details

C711S105000, C711S169000

active

06549991

ABSTRACT:

BACKGROUND OF THE INVENTION
A. Field of the Invention
The present invention relates to memory controllers, and more particularly to a method and apparatus providing pipelined SDRAM access that maximizes bus utilization in a highly integrated system.
B. Description of the Related Art
Computer systems are becoming ever more powerful by the day, and computer speeds have increased dramatically in recent years. As VLSI technology moves to higher densities of integration, system on chip (S.O.C.) has become a feasible design methodology. For example, the chipset in a personal computer system may integrate the north-bridge and VGA functions into a single chip using a shared-memory architecture, in which the north bridge and the VGA controller use one system memory together. In such an application, the memory subsystem is accessed by multiple tasks (CPU requests and VGA requests) and operates in multitasking mode. Memory data throughput therefore dominates overall system performance.
Most DRAMs support various access modes including, for example, page mode, as well as various high-speed refresh and pre-charge mechanisms. Essentially, page mode and static-column mode help to minimize wait states (i.e., times at which the CPU is suspended to allow the memory to catch up with the CPU). Another technique for minimizing wait states is to separate the DRAM into multiple banks. SDRAM has one important feature supporting random read/write access: one bank can be activated while another bank is being accessed, i.e., multi-bank ping-pong access.
A typical block diagram of a conventional SDRAM is illustrated in FIG. 1. The central part of the SDRAM is the memory cell array 100; typically, there are two or four memory cell arrays (called banks) in one SDRAM chip. A bit is stored in an individually addressable unit memory cell, located by a row address and a column address within a specific bank. Therefore, the memory address that the memory controller derives from a CPU or VGA memory request is divided into two parts, a row address and a column address. Row address decoder 110 and column address decoder 120 decode the corresponding addresses, which are read into the row address buffer and the column address buffer on the assertion of the /RAS and /CAS signals.
All memory commands are referenced to the rising edge of the clock from clock generator 130 when the /CS signal is asserted, and are defined by the status of the /RAS, /CAS and /WE signals. Command decoder and control logic 150 manages the proper operation of each internal block based on the mode register 140 status and the input memory command. The operation mode, /CAS latency, burst type and burst length are defined by the mode register set (MRS) command during the DRAM initialization sequence.
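As a hedged illustration of the MRS command, the sketch below packs the burst length, burst type and /CAS latency fields into a mode-register word using the field layout found in common JEDEC-style SDR SDRAM datasheets (A0-A2 burst length, A3 burst type, A4-A6 /CAS latency); the exact layout must be verified against the datasheet of the actual part.

```python
# Sketch of a typical SDR SDRAM mode-register word. Field positions are
# an assumption based on common JEDEC-style datasheets, not taken from
# the patent: A0-A2 burst length (encoded), A3 burst type, A4-A6 /CAS latency.
BURST_LEN_CODE = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011}

def mrs_word(burst_length, interleaved, cas_latency):
    """Encode the value driven on the address pins during the MRS command."""
    return (BURST_LEN_CODE[burst_length]
            | (int(interleaved) << 3)      # 0 = sequential, 1 = interleaved
            | (cas_latency << 4))          # /CAS latency in clocks
```

For example, a burst length of 4, sequential burst type and /CAS latency 3 would encode as `mrs_word(4, False, 3)`.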
For a memory read access, data control circuit 160 outputs the stored data from the addressed memory cell array 100: the sense amplifier amplifies the data and transfers it into the input/output buffer 180 through latch circuit 170, which is controlled by the /DQM signal. The buffer then supplies the data onto the data bus via the data pins. For a memory write access, data is latched into the input/output buffer from the data bus according to /DQM, then amplified by the sense amplifier, transferred to the addressed memory cell array, and stored.
From the behavior of SDRAM, we can conclude that a basic memory controller carries out three jobs: decoding the address of a CPU or VGA memory request into a row address and a column address, asserting the control signals (/CS, /RAS, /CAS, /WE, /DQM) correctly, and transferring write data and accepting read data.
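The first of these jobs can be sketched as a simple bit-field split. The geometry below (4 banks, 12 row bits, 8 column bits) is an assumption for illustration only; real controllers program these widths from the DRAM configuration.

```python
# Behavioral sketch of SDRAM address decoding. The geometry is assumed:
# 4 banks (2 bits), 4096 rows (12 bits), 256 columns (8 bits).
BANK_BITS, ROW_BITS, COL_BITS = 2, 12, 8

def decode_address(addr):
    """Split a flat word address into (bank, row, column) fields."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (addr >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, col
```

The row field is driven on the address pins with the activate command, and the column field with the subsequent read/write command.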
Depending on the design, there are many ways to control an SDRAM. For example, after a memory access completes, whether or not the activated page is pre-charged affects the control scheme. An activated page is an addressed row that has completed the activate command (ACT) and remains in the row-active state waiting for a read/write command. Whenever this page is pre-charged, any subsequent memory access must issue an activate command before issuing a read/write command, even when the accesses fall on the same page. On the other hand, if the page remains in the activated state, the activation time is saved when the subsequent access is a hit cycle. A hit cycle is a memory access that falls on an activated page. However, there is a penalty if the following access is a miss cycle, since the activated page must first be pre-charged and the desired page then activated. A miss cycle, conversely, is a memory cycle that falls on the same bank as, but a different row from, the activated page. In addition, there is the row-empty cycle: a memory access to a bank whose rows are all in the idle state. For a row-empty cycle, the target page must be activated before a RD/WRT command is issued.
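The hit/miss/row-empty classification described above can be modeled in a few lines. This is our own behavioral sketch of an open-page policy, not the patented logic; the per-bank state is simply the currently open row.

```python
# Sketch of hit / miss / row-empty classification under an open-page policy.
# Per-bank state: the currently activated row, absent if all rows are idle.
open_row = {}  # bank -> activated row

def classify(bank, row):
    """Return the cycle type and the command sequence it requires."""
    if bank not in open_row:
        return "empty", ["ACT", "RD/WRT"]            # bank idle: activate first
    if open_row[bank] == row:
        return "hit", ["RD/WRT"]                     # row already open
    return "miss", ["PRE", "ACT", "RD/WRT"]          # wrong row open

def access(bank, row):
    kind, cmds = classify(bank, row)
    open_row[bank] = row                             # page left open afterwards
    return kind, cmds
```

The command lists make the trade-off visible: a hit needs one command, a miss needs three, which is exactly the penalty weighed when choosing whether to pre-charge after each access.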
One important feature of SDRAM for improving data bandwidth is its support for random row read/write access, i.e., multi-bank ping-pong access: a bank can be activated or pre-charged while another bank is being accessed. It follows that, provided conflicts on the command bus and data bus are avoided, commands to different banks can be issued in a pipelined or overlapped manner to utilize the buses more efficiently. This is a predominant factor in memory access performance, because data bandwidth depends heavily on the utilization of the data bus.
SUMMARY OF THE INVENTION
In a highly integrated system, the memory subsystem may be accessed by multiple devices and operated in multitasking mode. The present invention therefore provides a pipelined method and apparatus that optimizes DRAM bus utilization to gain maximum data bandwidth.
After receiving a memory access request, a memory request priority arbitration unit determines which request is granted; the corresponding memory commands are then generated according to the current SDRAM internal status. Based on their characteristics, these commands are divided into background commands and foreground commands and pushed into a background queue and a foreground queue, respectively. Foreground commands are the memory read and write commands concerned with data transfer. Background commands are the remaining SDRAM commands for initialization, refresh, and read/write preliminaries such as the pre-charge and activate commands. When the command at the head of a queue satisfies all required conditions, it is issued onto the DRAM bus at the earliest allowable time. Since background commands are not data related, they can be issued during intervals in which the data bus is busy but the command bus is free; that is, background commands can be overlapped with foreground commands. Therefore, by applying a background queue and a foreground queue, memory commands can be issued onto the DRAM bus in a pipelined manner to gain maximum bus utilization. In the best case, background accesses are hidden under foreground accesses and the DRAM bus is always transferring data. Moreover, this invention works well not only on conventional SDRAM but also on current mainstream DRAM products such as VC SDRAM and DDR SDRAM.
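The overlap of background and foreground queues can be sketched as a one-command-per-cycle simulation. This is a deliberately minimal model of our own (not the patented apparatus): foreground RD/WRT commands occupy the data bus for a burst, while background commands only need a free command-bus slot, so they slip into cycles where the data bus is still busy. Timing constraints such as ACT-to-RD delay are omitted for brevity.

```python
from collections import deque

def run(background, foreground, cycles):
    """One command-bus slot per cycle. Foreground commands (cmd, burst)
    hold the data bus for `burst` cycles; background commands do not,
    so they are issued while a previous burst is still draining."""
    issued, data_free_at = [], 0
    for now in range(cycles):
        if foreground and now >= data_free_at:
            cmd, burst = foreground.popleft()
            issued.append((now, cmd))
            data_free_at = now + burst       # data bus busy for the burst
        elif background:                     # command bus free, data bus busy
            issued.append((now, background.popleft()))
    return issued

bg = deque(["ACT B1", "PRE B0"])
fg = deque([("RD B0", 4), ("RD B1", 4)])
trace = run(bg, fg, 8)
```

In the trace, "ACT B1" and "PRE B0" issue at cycles 1 and 2, hidden entirely under the 4-cycle burst of "RD B0", so "RD B1" can start the moment the data bus frees at cycle 4.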


REFERENCES:
patent: 5918242 (1999-06-01), Sarma et al.
patent: 6295586 (2001-09-01), Novak et al.
patent: 6385708 (2002-05-01), Stracovsky et al.
