Data bus bandwidth allocation apparatus and method

Electrical computers and digital data processing systems: input/output – Intrasystem connection

Reexamination Certificate


Details

C710S029000, C710S060000, C709S241000


active

06499072

ABSTRACT:

FIELD OF THE INVENTION
The invention relates generally to data bus bandwidth allocation apparatus and methods and more particularly to data bus bandwidth allocation apparatus and methods for allocating bandwidth for memory reads.
BACKGROUND OF THE INVENTION
Data processing systems, such as computers, telecommunication systems and other devices, may use common memory that is shared by a plurality of processors or software applications. For example, in a computer, a processor, such as a host processor, may generate data for other processing devices or processing engines and may need to communicate the processed data over a bus. Systems that also employ video or graphics processing devices may use memory such as frame buffers, which store data used by memory data requesters such as 3D drawing engines, display engines, graphics interface engines, and other suitable engines. Memory controllers use request arbiters to arbitrate among clients' memory read and write requests and decide which client will receive data from system memory or from the frame buffer. A memory controller typically processes the memory requests to obtain the data from the frame buffer during memory reads, or from an input data buffer (FIFO) associated with the host processor.
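The request arbitration described above can be illustrated with a minimal round-robin scheme. This is only an illustrative sketch, not the arbitration method claimed by the patent; the client names and the rotating-order policy are assumptions for the example.

```python
from collections import deque

def round_robin_arbiter(pending, order):
    """Grant the bus to the next requesting client in rotating order.

    pending: set of client IDs with an outstanding read/write request.
    order: deque of all client IDs; rotated so the granted client
           moves toward the back (simple fairness).
    """
    for _ in range(len(order)):
        client = order[0]
        order.rotate(-1)
        if client in pending:
            return client
    return None  # no client is requesting this cycle

# Hypothetical clients: a 3D engine, a display engine, and the host
# processor contend for read data; only two are requesting this cycle.
clients = deque(["3d", "display", "host"])
granted = round_robin_arbiter({"display", "host"}, clients)  # "display"
```

Real arbiters typically add priority classes (e.g., real-time display traffic ahead of bulk transfers), but the rotating-grant core shown here is a common baseline.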
A problem arises when the amount of data delivered by the parallel access to system and local memory exceeds the data throughput capacity of the bus(es) providing the data transport from the memory controller to the clients. As such, memory controllers can receive more data from memory than they may be able to output to memory read backbones (e.g., buses) to the plurality of requesters requesting data from the frame buffer or from the host processor. As a result, collisions can occur creating an efficiency problem and potential data throughput bottlenecks. One solution is to add an additional bus to the memory read backbone for peak demand periods when, for example, all requesters are requesting data and their demands can be fulfilled by concurrent read activity to both frame buffer and system memory. However, such a system can be prohibitively costly due to layout complexity on an integrated circuit and may also require three (or more) port data return buffers that can accept read data on all ports every clock cycle. Some known processing systems use dual read buses to facilitate higher throughput from memory reads but would require a third bus to handle the peak bandwidth requirements.
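The mismatch described above is simple arithmetic: when the aggregate rate of parallel reads from system and local memory exceeds the capacity of the read backbone, the surplus must be buffered or lost. The figures below are hypothetical and chosen only to make the point; the patent does not specify rates.

```python
def backbone_deficit(source_rates_gbps, backbone_rate_gbps):
    """Return how much aggregate read data (Gb/s) exceeds what the
    read backbone can transport; 0.0 means no bottleneck."""
    demand = sum(source_rates_gbps)
    return max(0.0, demand - backbone_rate_gbps)

# Hypothetical figures: frame buffer and system memory each deliver
# 8 Gb/s in parallel, but a single read backbone carries only 10 Gb/s,
# leaving 6 Gb/s of excess demand that must queue somewhere.
deficit = backbone_deficit([8.0, 8.0], 10.0)  # 6.0
```

This surplus is exactly what motivates either a costly extra bus, as discussed above, or the regulated allocation the invention proposes.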
The problem can be compounded when the host system's memory controller returns data in an unregulated fashion, namely whenever the host processor does not make requests and the full bandwidth of the system memory becomes available. Typically, the memory controller can control the rate at which data is read from the frame buffer but has no control over when and how much data arrives from the host processor. As such, there is no control over one data source, while the amount and frequency of data the memory controller obtains from the frame buffer memory can be controlled. Such systems use FIFO data buffers to help reduce overflow problems, but even with deep buffers overflow cannot be avoided if the ratio between memory bandwidth and transport bandwidth is high. Moreover, with real-time requesters, such as audio and video processors, data can be lost if it is not processed when made available. Known systems, such as video and graphics processing systems, may also include memory request sequencers which obtain the appropriate amount of data per request from a frame buffer over one or more channels. In addition, such systems may have a multiplexing scheme which multiplexes data from the host processor with data from the frame buffer so that it is passed over the memory read backbone to the requisite memory requesters. However, known systems typically encounter data collision conditions through such multiplexing schemes when a plurality of requesters are requesting data from both the frame buffer and another source such as a host processor.
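The claim that deep FIFOs cannot prevent overflow when the arrival-to-drain ratio stays high can be checked with a toy cycle-by-cycle simulation. All parameters here (FIFO depth, burst size, drain rate) are invented for illustration.

```python
def simulate_fifo(depth, arrivals, drain_per_cycle):
    """Sketch of an input FIFO fed by an unregulated source.

    depth: FIFO capacity in words.
    arrivals: words delivered by the source on each cycle.
    drain_per_cycle: words the transport bus removes per cycle.
    Returns (final occupancy, total words dropped to overflow).
    """
    occupancy, dropped = 0, 0
    for arriving in arrivals:
        occupancy += arriving
        if occupancy > depth:
            dropped += occupancy - depth  # data lost on overflow
            occupancy = depth
        occupancy = max(0, occupancy - drain_per_cycle)
    return occupancy, dropped

# A sustained burst of 4 words/cycle from the host while the transport
# drains only 1 word/cycle: the FIFO fills within a few cycles and
# starts dropping, no matter how deep it is made.
occ, lost = simulate_fifo(depth=8, arrivals=[4] * 6, drain_per_cycle=1)
```

Doubling the depth only delays the first drop; sustained 4:1 input-to-output pressure always overflows eventually, which is why regulation of the controllable (frame buffer) side is needed.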
Consequently, there exists a need for a data bus bandwidth allocation apparatus that facilitates suitable bandwidth allocation in a system that has an unregulated bus, such as a bus from the host processor or other source, and a regulated bus, such as a memory bus between a memory controller and a frame buffer memory or another regulated source.


REFERENCES:
patent: 5261099 (1993-11-01), Bigo et al.
patent: 5367331 (1994-11-01), Secher et al.
patent: 5940369 (1999-08-01), Bhagavath et al.
patent: 6035333 (2000-03-01), Jeffries et al.
patent: 6263020 (2001-07-01), Gardos et al.
patent: 6324165 (2001-11-01), Fan et al.
patent: 6353685 (2002-03-01), Wu et al.
patent: 405227194 (1993-09-01), None

