Intermediate buffer control for improving throughput of...

Electrical computers and digital data processing systems: input/output – Access arbitrating

Reexamination Certificate


Details

C710S039000, C710S052000, C370S429000


active

06584529


FIELD OF THE INVENTION
The invention is generally related to electronic bus architectures and like interconnects, and in particular, to the access of a shared resource such as a memory by multiple requesters using split transactions.
BACKGROUND OF THE INVENTION
As computers and other electronic devices are called upon to handle increasingly difficult tasks, greater and greater performance demands are placed on such devices. Of particular concern in many devices is the communications speed, or “bandwidth”, between interconnected devices, as the speed at which information is transmitted between such devices can have a significant impact on the overall performance of an electronic system.
One manner of coupling multiple electronic devices together is through the use of a “bus”, an interconnection that is often used to permit a number of electronic devices to access a shared resource such as a memory. In a bus architecture, every device coupled to a bus receives the signals broadcast on the bus, with only the intended destination device (often indicated by an “address” broadcast over the bus) paying attention to the broadcasted signals. Any non-addressed devices simply ignore the broadcasted signals.
A split transaction bus is a particular form of bus architecture where information transfer is handled using discrete “transactions” issued by requesters, with each transaction including both an address phase and a data phase. During the address phase, typically both the location and size of a data transfer are specified, as well as the type of transaction being performed (e.g., read, write, etc.). For example, the address phase of a transaction might request that a read operation be performed to obtain 16 bytes of information stored at a particular address on a target device. Then, during the data phase, the actual information being requested is transferred from the target device to the requesting device over the bus.
An important characteristic of a split transaction bus is that the address and data phases of a transaction are “split”, or demultiplexed, from one another such that the data phase is not required to be performed at any fixed time relative to the address phase. Splitting the address and data phases often improves the performance of a split transaction bus since other productive work can be performed during the time period between the address and data phases of a transaction. For example, it may take two or three bus cycles after receiving the address phase of a transaction for a target device to transmit the requested data over the bus. During the interim, other transactions can be initiated over the bus rather than having the bus stand idle waiting for the requested data to be transferred (a process known as “pipelining”). The maximum performance available from any bus is obtained when useful information is being transmitted on every cycle, and as such, the ability to split transactions can often assist in minimizing the number of cycles that go unused and thus maximizing bus efficiency.
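The split-transaction model described above can be illustrated with a minimal sketch. All class and field names here are illustrative, not drawn from the patent: the key point is that address phases enter a queue immediately, leaving the bus free for further requests while earlier data phases remain outstanding.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Transaction:
    op: str        # transaction type, e.g. "read" or "write"
    address: int   # target location specified in the address phase
    size: int      # size of the data transfer, in bytes

class SplitTransactionBus:
    """Toy model: address and data phases are decoupled (split)."""

    def __init__(self):
        # Address phases issued whose data phases are still outstanding.
        self.pending = deque()

    def address_phase(self, txn):
        # Issuing the request takes one phase; the bus is then free
        # for other work (pipelining) until the target responds.
        self.pending.append(txn)

    def data_phase(self):
        # Complete the oldest outstanding request when the target is ready.
        return self.pending.popleft()

bus = SplitTransactionBus()
bus.address_phase(Transaction("read", 0x1000, 16))
# A second request is pipelined before the first data phase occurs.
bus.address_phase(Transaction("read", 0x2000, 32))
first = bus.data_phase()  # the first request's data returns first
```

The sketch omits timing entirely; its purpose is only to show that nothing forces the data phase to follow its address phase immediately.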
One specific application of a split transaction bus is an Accelerated Graphics Port (AGP) bus, which is a high performance interconnect used to transmit graphical data between a graphical accelerator (functioning as a requester) and a system memory (functioning as a shared resource) in a computer. Many graphical applications, in particular 3D applications, have relatively high memory bandwidth requirements, and an AGP bus assists in accelerating the transfer of graphical data in such memory-intensive applications.
AGP buses support a number of different types of transactions, including high priority reads, low priority reads, high priority writes, low priority writes, etc. Transaction requests for each type of transaction (representing the address phases of the transactions) are typically placed on a common request queue, and the requests are ordered in the queue based on predetermined rules. Each transaction type, however, typically has its own data storage area, or “intermediate buffer”, within which data is temporarily stored when being transferred between an AGP device and the system memory. Given the split nature of the AGP transactions, the data transfers associated with the requests may be performed in different orders, so long as the data transfers for any single type of access are performed in the specified order.
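The ordering rule just described, a common request queue with one intermediate buffer per transaction type, can be sketched as follows. The type names and tag scheme are assumptions for illustration; the AGP specification defines the actual queue semantics.

```python
from collections import deque

# Hypothetical transaction types, per the high/low priority read/write
# categories mentioned above.
TYPES = ("hp_read", "lp_read", "hp_write", "lp_write")

request_queue = deque()                # common queue of address phases
buffers = {t: deque() for t in TYPES}  # one intermediate buffer per type

def enqueue(txn_type, tag):
    request_queue.append((txn_type, tag))
    buffers[txn_type].append(tag)      # slot reserved in that type's buffer

enqueue("lp_read", 1)
enqueue("hp_write", 2)
enqueue("lp_read", 3)

# Data transfers for *different* types may complete out of request order...
assert buffers["hp_write"].popleft() == 2
# ...but within a single type the specified order is preserved:
assert buffers["lp_read"].popleft() == 1
assert buffers["lp_read"].popleft() == 3
```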
AGP buses are typically interfaced with a system memory through a memory interface that also handles accesses to the system memory by the main processor for the computer, with arbitration functionality included in the memory interface to manage the processor and AGP accesses to the system memory. In many instances, the system is configured to optimize data transfer between the main processor and the system memory, sometimes to the detriment of the AGP bus. For example, for x86 processor architectures, often the main processor and system memory are configured to process relatively “short” transactions of up to 4 clock cycles in length (where a clock cycle represents the smallest unit of time on a bus). However, many AGP transactions are relatively longer, and thus need to be split up into multiple smaller transactions by the memory interface.
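The splitting of a long AGP transaction into short memory accesses can be sketched as simple chunking. The 4-byte-per-clock figure assumes a 32-bit bus at 1× speed, and the function name is hypothetical.

```python
BYTES_PER_CLOCK = 4       # 32-bit (4-byte) bus at 1x speed (assumption)
MAX_CLOCKS_PER_ACCESS = 4 # "short" transactions of up to 4 clock cycles

def split_transaction(address, size):
    """Split one long transfer into accesses the memory interface accepts."""
    max_chunk = BYTES_PER_CLOCK * MAX_CLOCKS_PER_ACCESS  # 16 bytes
    chunks = []
    offset = 0
    while offset < size:
        n = min(max_chunk, size - offset)
        chunks.append((address + offset, n))
        offset += n
    return chunks

# A 64-byte AGP transfer becomes four 16-byte memory accesses.
chunks = split_transaction(0x1000, 64)
```

Under these assumptions, each short access occupies at most four clocks on the memory interface, which is why several of them may be needed before any appreciable space is freed in an intermediate buffer.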
In part due to the highly pipelined nature of both the AGP bus and many memory interfaces, a potential source of inefficiency has been identified, where delays, known as “wait states”, may need to be inserted on the AGP bus in certain circumstances. In particular, an AGP device is typically interfaced with a memory using an AGP interface coupled to an AGP bus, which places requests on the request queue and handles the transfer of data between the AGP device and the various intermediate buffers supported for the different available transaction types. A memory interface in turn pulls requests off the request queue and handles data transfer between the memory and the intermediate buffers. With this configuration, the memory interface and the AGP interface are typically handling different requests at any given time.
Conventional AGP implementations permit a write transaction to be started on an AGP bus if there is sufficient space in the intermediate buffer associated with the transaction to hold one block (four clocks worth) of data, even if there is not enough free space to store all of the data to be transferred via the transaction. The AGP 2.0 Specification supports three speeds, 1×, 2× and 4×, with a maximum of 64 bytes of information transmitted in a transaction. Thus, with a 32-bit (4-byte) bus, a maximum of 4 blocks may be required for a transfer at 1× speed, a maximum of 2 blocks at 2× speed, and a single block at 4× speed. It has been found, however, that if a write access is initiated when the minimum space required to start the transaction is available, but the minimum space required to complete the transaction is not, wait states may need to be inserted on the AGP bus to delay activity until the memory interface has freed enough space in the intermediate buffer to complete the AGP transfer into the intermediate buffer.
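The block arithmetic above, and the condition under which wait states are risked, can be worked through directly. A block is four clocks of data, so block size scales with bus speed; the function names are illustrative.

```python
BUS_WIDTH_BYTES = 4    # 32-bit (4-byte) bus
CLOCKS_PER_BLOCK = 4   # one block = four clocks worth of data
MAX_TRANSFER = 64      # AGP 2.0 maximum bytes per transaction

def blocks_required(speed_multiplier, transfer_bytes=MAX_TRANSFER):
    """Blocks needed for a transfer at 1x, 2x, or 4x speed."""
    block_bytes = BUS_WIDTH_BYTES * speed_multiplier * CLOCKS_PER_BLOCK
    return -(-transfer_bytes // block_bytes)  # ceiling division

def risks_wait_states(free_blocks, speed_multiplier):
    # A write may start with a single free block, but wait states are
    # risked whenever fewer blocks are free than the transfer needs
    # in order to complete.
    return free_blocks >= 1 and free_blocks < blocks_required(speed_multiplier)

# 64 bytes needs 4 blocks at 1x, 2 blocks at 2x, 1 block at 4x.
print(blocks_required(1), blocks_required(2), blocks_required(4))  # 4 2 1
print(risks_wait_states(free_blocks=1, speed_multiplier=1))        # True
```

At 1× speed, starting with one free block leaves three blocks still to be drained by the memory interface, which is the scenario the next paragraph describes.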
During the time that wait states are inserted on an AGP bus, no new AGP request transactions will be started by the AGP interface. This is despite the fact that different types of transactions (e.g., read transactions) could otherwise be performed during this time period. Moreover, this problem is exacerbated due to the need to split long AGP transactions into multiple short transactions to access the memory, since multiple short transactions may need to be issued on the memory interface before space is freed in the associated intermediate buffer.
Waiting to start transferring data associated with a transaction into an intermediate buffer until all of the space necessary to store the data is available is not a desirable alternative: although the aforementioned wait state problem would typically not be a concern, performance would still not be optimal because ordering rule
