Management of PCI read access to a central resource

Electrical computers and digital processing systems: memory – Storage accessing and control – Control technique

Reexamination Certificate


Details

US Classification: C711S151000, C711S211000

Status: active

Patent Number: 06557087

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to the processing of read commands in a PCI bus system and, more particularly, to the management of PCI read requests to conduct contiguous read operations which may require multiple transmissions of data.
BACKGROUND OF THE INVENTION
The Peripheral Component Interconnect (PCI) bus system is a high-performance expansion bus architecture which offers a low latency path employing PCI bridges through which a host processor may directly access PCI devices. In a multiple host environment, a PCI bus system may include such functions as data buffering and PCI central functions such as arbitration over usage of the bus system.
The incorporated '074 U.S. Patent describes an example of a complex PCI bus system for providing a connection path between a secondary PCI bus, to which are attached a plurality of hosts, and at least one primary PCI bus, to which are attached a plurality of peripheral devices. The incorporated '074 U.S. Patent additionally defines many of the terms employed herein; such definitions are also available from publications provided by the PCI Special Interest Group and will not be repeated here.
Computer system data storage controllers may employ PCI bus systems to provide fast data storage from hosts, such as network servers, via channel adapters and the PCI bus system, to attached data storage servers having storage devices, cache storage, or non-volatile cache storage.
A channel adapter (an adapter coupling a host system to a secondary PCI bus) attempts to read large amounts of data at once from the primary PCI bus, such as 4K bytes of data, and a remote transaction control prefetches the data stream, breaking it into 512 byte groups, as discussed in the incorporated '074 U.S. Patent. In a typical PCI bus protocol, a local prefetch engine of a central resource at the primary PCI bus accesses and prefetches each group of data, usually in “tracks” or “cache lines” of an expected length, such as 128 bytes, either continuously in a single read operation or by contiguous read operations in a DMA (direct memory access) process to a PCI bus adapter of the local bridge. However, if the device supplying the data is unable to fill the read request by accessing the entire 512 bytes in a continuous operation, and another local bridge PCI bus adapter requests the same primary PCI bus, the central resource may flush the read operation and start the new request.
For example, the local prefetch read request may comprise read bursts that start at a PCI address “x” and read 128 bytes of data, with the next PCI bursts resuming at PCI address “x+128”. However, if the first read runs out of data and pauses, another read command is given access under the fairness algorithm of the central resource, that read operation starting at address “y”. The first read prefetch is thus slowed and may be unable to complete, so the total amount of data requested by the original read command will not have been provided. Since not all of the data has been provided, the prefetch of the central resource may restart, reaccess the same data, and repeat the same process until all of the data has finally been read, with the requests again subject to interruption by other read commands. Alternatively, another host may request access to the same primary PCI bus at an alternate channel adapter, so that the primary PCI bus is requested again via a remote path, with the requests continually interrupting each other.
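For illustration only, the following minimal C sketch (not part of the patent disclosure; the 512-byte group and 128-byte burst sizes are taken from the discussion above, while the flush counts and bursts per attempt are assumptions) shows how each flushed prefetch attempt adds wasted bus traffic, since the group must be re-read from the start:

#include <stdio.h>

#define GROUP_BYTES 512   /* one prefetch group of the larger read stream */
#define BURST_BYTES 128   /* one cache-line read burst                    */

/* Bus traffic needed to deliver one contiguous group when 'flushes' earlier
 * attempts were each interrupted after 'bursts_per_attempt' bursts and
 * discarded; only the final, uninterrupted attempt delivers the full group. */
static unsigned bus_traffic_for_group(unsigned flushes, unsigned bursts_per_attempt)
{
    unsigned wasted = flushes * bursts_per_attempt * BURST_BYTES;
    return wasted + GROUP_BYTES;
}

int main(void)
{
    for (unsigned flushes = 0; flushes <= 3; flushes++)
        printf("%u flush(es): %u bytes moved to deliver a %u-byte group\n",
               flushes, bus_traffic_for_group(flushes, 2), GROUP_BYTES);
    return 0;
}

Each interruption largely repeats work already done, which is the inefficiency the invention seeks to address.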
A latency timer has been employed to limit the burst size where the target device cannot satisfy the entire read request; when the timer expires, the primary PCI bus is switched to the next agent's command, and only the data received before expiration is transferred. Again, this results in a need for subsequent read requests, repeating the same process until the data has finally been read.
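As a rough illustration (the timer value and target data rate below are assumptions, not values from the patent), a latency-timer-bounded transfer can be modeled as follows; each expiration leaves a partial transfer that must be requested again:

#include <stdio.h>

#define LATENCY_TIMER_CLOCKS 32   /* illustrative latency timer value */

/* bytes_per_clock models how quickly the target can source data; a slow or
 * starved target yields fewer bytes before the timer forces the bus release */
static unsigned transfer_until_timer(unsigned request_bytes, unsigned bytes_per_clock)
{
    unsigned moved = 0;
    for (unsigned clk = 0; clk < LATENCY_TIMER_CLOCKS && moved < request_bytes; clk++)
        moved += bytes_per_clock;
    return moved > request_bytes ? request_bytes : moved;
}

int main(void)
{
    unsigned remaining = 512, requests = 0;
    while (remaining > 0) {                      /* repeat until all data is read */
        unsigned got = transfer_until_timer(remaining, 4);
        remaining -= got;
        requests++;
        printf("request %u moved %u bytes, %u remaining\n", requests, got, remaining);
    }
    return 0;
}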
Thus, however conducted, the time required to complete a typical read operation of contiguous data of, e.g., 512 bytes in a series of discontinuous read operations, in the context of competing read commands, is very lengthy, reducing the efficiency of the read operation and the effective bandwidth of the PCI bus system during the discontinuous read operations.
SUMMARY OF THE INVENTION
An object of the present invention is to increase the efficiency of read operations of contiguous data in a PCI bus system, thereby providing a higher effective read bandwidth for the PCI bus system.
In a PCI bus system having at least a primary PCI bus, a PCI read access management system and method are provided for managing read access between two agents providing PCI read requests to conduct contiguous read operations of the central resource at the PCI bus. Dual transaction control logic units are each respectively coupled to a separate one of the requesting agents. An arbitration request connection is provided, coupling the dual transaction control logic units. A PCI read request by one of the agents (e.g., agent A), recognized by one of the dual transaction control logic units (e.g., unit 1), is communicated to the arbitration request connection, which arbitrates between the transaction control logic units for reserving the primary PCI bus for the one agent (agent A), and the one transaction control logic unit (unit 1) grants read access to the primary PCI bus for the one agent (agent A) for the contiguous read operations. The one transaction control logic unit (unit 1) then maintains the reservation by signaling the arbitration request connection until completion of the contiguous read operations.
The reservation is accomplished by the other transaction control logic unit blocking the other agent from asserting its request lines at the primary PCI bus, such as by withholding issuance of a grant signal.
The completion of contiguous read operations is identified by the transfer of data read at the primary PCI bus during the contiguous read operations equaling an established byte count, e.g., 512 bytes.
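The following C sketch is a purely behavioral model of this reservation scheme (the data structures, function names, and two-element layout are illustrative, not the patented logic): one transaction control unit reserves the primary PCI bus for its agent, the other withholds its agent's grant, and the reservation is released once the transferred data equals the established byte count:

#include <stdbool.h>
#include <stdio.h>

#define CONTIGUOUS_BYTE_COUNT 512   /* completion threshold from the summary */

struct txn_ctrl {
    const char *agent;       /* agent served by this transaction control unit */
    bool        granted;     /* grant currently issued to the agent           */
    unsigned    transferred; /* bytes read during the contiguous operations   */
};

/* shared "arbitration request connection": index of the reserving unit, or -1 */
static int reservation = -1;

static bool request_reservation(struct txn_ctrl units[2], int requester)
{
    if (reservation == -1 || reservation == requester) {
        reservation = requester;                 /* reserve the primary PCI bus  */
        units[requester].granted = true;
        units[1 - requester].granted = false;    /* other unit blocks its agent  */
        return true;
    }
    return false;                                /* bus reserved for the peer    */
}

static void transfer(struct txn_ctrl units[2], int requester, unsigned bytes)
{
    if (!units[requester].granted)
        return;
    units[requester].transferred += bytes;
    if (units[requester].transferred >= CONTIGUOUS_BYTE_COUNT) {
        /* contiguous read operations complete: release the reservation */
        reservation = -1;
        units[requester].granted = false;
        units[requester].transferred = 0;
    }
}

int main(void)
{
    struct txn_ctrl units[2] = { { "agent A" }, { "agent B" } };

    request_reservation(units, 0);               /* agent A wins the reservation */
    printf("%s grant while %s holds the reservation: %d\n",
           units[1].agent, units[0].agent,
           request_reservation(units, 1));       /* blocked -> 0                 */

    for (int i = 0; i < 4; i++)
        transfer(units, 0, 128);                 /* four 128-byte bursts = 512   */

    printf("%s grant after %s completes: %d\n",
           units[1].agent, units[0].agent,
           request_reservation(units, 1));       /* now succeeds -> 1            */
    return 0;
}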
For a fuller understanding of the present invention, reference should be made to the following detailed description taken in conjunction with the accompanying drawings.


