Method of peer-to-peer mastering over a computer bus

Electrical computers and digital data processing systems: input/output – Intrasystem connection – Bus access regulation

Reexamination Certificate

Details

Classification: C710S107000
Type: Reexamination Certificate
Status: active
Patent number: 06223238

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates generally to information processing systems, such as personal computers (PCs). More particularly, the invention relates to processing transactions in a computer system having a multiple bus architecture.
2. Background of the Related Art
Modern computer systems, such as personal computers (PCs), process an enormous amount of information in a relatively short time. To perform its sophisticated functions, a computer system typically includes a main processor, memory modules, various system and bus control units, and a wide variety of data input/output (I/O) devices. Typically, these computer devices communicate control and data signals in accordance with a predetermined signal protocol. However, with the employment of a multiple bus architecture, these devices often communicate across a plurality of bus protocols and bridging devices. A bridging device performs protocol conversion between two buses, thereby allowing each device involved in a transaction to know how and when the other device is going to perform a particular task.
A transaction over a particular bus normally involves a requesting device (the “requester”) and a responding device (the “target”). The requester requests the transfer of data or a completion signal from a target in the system. The request typically includes a number of control bits indicating the type of request and an address of the desired data or device. The target, in turn, responds to the transaction by sending a completion signal along with any data, if necessary. With various devices acting as requesters and targets in the system, bus protocols often include the ability to handle multiple transactions among multiple devices concurrently. One example of such a bus is the pipelined bus, which allows requests by various requesters to be pending (i.e., unfulfilled) over the bus at the same time. The incorporation of separate data and address buses makes this possible. In a pipelined transaction, a requester sends a request on the address bus, and a target returns a reply on the data bus. Multiple requesters may send multiple requests over the address bus, and multiple targets may respond in the same order as the requests over the data bus. In a special pipelined bus, commonly referred to as the split transaction bus, the responses do not have to occur in the same order as their corresponding requests. Each transaction is tagged so that the requester and the target can keep track of its status. This characteristic permits maximum utilization of the pipelined bus by effectively increasing bus bandwidth. This advantage, however, comes at the cost of higher latency than when a request is held during the pendency of a transaction.
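To make the tagging concrete, the following C sketch models a requester that tracks outstanding split transactions by tag, so a reply can be matched to its request even when replies return out of order. The type and function names are illustrative assumptions, not taken from the patent or any bus specification.

    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical model of a split-transaction requester: every request
     * carries a tag, and the tag (not arrival order) matches a later reply
     * on the data bus to the original request on the address bus. */

    #define MAX_OUTSTANDING 8

    typedef struct {
        uint32_t address;   /* address driven on the address bus     */
        bool     pending;   /* true while no reply has been received */
    } transaction_t;

    static transaction_t table[MAX_OUTSTANDING];

    /* Issue a request: record it under a free tag and return that tag. */
    static int issue_request(uint32_t address)
    {
        for (int tag = 0; tag < MAX_OUTSTANDING; tag++) {
            if (!table[tag].pending) {
                table[tag].address = address;
                table[tag].pending = true;
                return tag;             /* the tag travels with the request */
            }
        }
        return -1;                      /* too many outstanding: stall      */
    }

    /* Complete a reply: only the tag is needed, so replies may arrive
     * out of order relative to the requests. */
    static void complete_reply(int tag)
    {
        if (tag >= 0 && tag < MAX_OUTSTANDING)
            table[tag].pending = false;
    }

    int main(void)
    {
        int t0 = issue_request(0x1000); /* first request                  */
        int t1 = issue_request(0x2000); /* second request, bus still free */
        complete_reply(t1);             /* second reply returns first     */
        complete_reply(t0);
        printf("tags issued: %d, %d\n", t0, t1);
        return 0;
    }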
The Pentium II or Pentium Pro processor is an example of a processor which supports a pipelined bus, commonly referred to as the P6 bus. The P6 bus includes a 64-bit external data bus and a 32- or 36-bit address bus. The speed of the P6 bus may be 66 or 100 MHz, and the processor clock rate may be two, three, or four times the speed of the bus. The P6 bus employs “packet” transmission to transmit data in much the same way that a network transmits packets. A data packet is known as a chunk, which may be up to 64 bits wide. The P6 bus supports split transactions. Accordingly, a P6 processor sends an address and then releases the bus for use by other bus requesters while waiting for the target device (e.g., a main memory) to respond. When the target is ready to respond, it returns the requested data over the data bus in 64-bit packets.
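For orientation, the quoted bus width and clock rate imply the following peak transfer rate. This is a rough illustrative calculation only, assuming one 64-bit transfer per bus clock and ignoring arbitration and protocol overhead.

    #include <stdio.h>

    /* Rough peak-bandwidth arithmetic for the figures quoted above: a 64-bit
     * (8-byte) data bus moving one chunk per bus clock at 100 MHz peaks at
     * 8 * 100e6 bytes/s = 800 MB/s, before any protocol overhead. */
    int main(void)
    {
        const unsigned chunk_bytes   = 64 / 8;   /* one 64-bit chunk           */
        const unsigned bus_clock_mhz = 100;      /* P6 bus runs at 66 or 100   */
        printf("peak transfer rate: %u MB/s\n", chunk_bytes * bus_clock_mhz);
        return 0;
    }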
The maximum data transfer that is supported by the P6 processor bus is four 64-bit wide transfers, commonly referred to as a “cache line” transfer. As noted above, the P6 processor supports split transactions. This feature is often characterized as a “deferred response” whereby a target device defers its response to a request by a requester. A deferred response allows the P6 bus to be freed to execute other requests while waiting for the response from a device with relatively long latency. A single P6 processor may have up to four transactions outstanding at the same time.
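A cache line transfer is therefore four 64-bit chunks, or 32 bytes. The C sketch below is a minimal, hypothetical model of the deferred-response pattern; the state names and functions are assumptions made for illustration and are not part of the P6 specification.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sketch of the deferred-response pattern: a cache line is
     * four 64-bit chunks (4 * 8 bytes = 32 bytes), and a single P6 requester
     * may have at most four transactions outstanding at once. */

    #define CHUNKS_PER_CACHE_LINE 4

    typedef enum { TXN_IDLE, TXN_ISSUED, TXN_DEFERRED, TXN_COMPLETE } txn_state_t;

    typedef struct {
        txn_state_t state;
        uint64_t    data[CHUNKS_PER_CACHE_LINE];  /* filled when the reply arrives */
    } txn_t;

    /* Target cannot answer yet: it defers, and the bus is freed for other work. */
    static void defer_response(txn_t *t)
    {
        if (t->state == TXN_ISSUED)
            t->state = TXN_DEFERRED;
    }

    /* Later, the target returns the whole cache line in 64-bit packets. */
    static void deliver_cache_line(txn_t *t, const uint64_t chunks[CHUNKS_PER_CACHE_LINE])
    {
        for (int i = 0; i < CHUNKS_PER_CACHE_LINE; i++)
            t->data[i] = chunks[i];
        t->state = TXN_COMPLETE;
    }

    int main(void)
    {
        txn_t txn = { TXN_ISSUED, {0} };
        const uint64_t line[CHUNKS_PER_CACHE_LINE] = {1, 2, 3, 4};
        defer_response(&txn);           /* bus released while the target works */
        deliver_cache_line(&txn, line); /* deferred reply arrives later        */
        printf("cache line bytes: %d\n", CHUNKS_PER_CACHE_LINE * 8);
        return 0;
    }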
FIG. 1 is a functional block diagram of an exemplary computer hardware layout having a multiple bus architecture. As shown in FIG. 1, a main processor CPU 110 is connected to a Host bus 120. A Host bridge 130 connects the Host bus 120 to a secondary bus, the PCI1 bus 140. One or more input/output devices, e.g., IOD1 142, are connected to the PCI1 bus 140. The Host bridge 130 supports communication between PCI devices, e.g., IOD1 142, and devices present on the Host bus 120 or elsewhere in the system. Another Host bridge 150 is often employed to connect another bus, the PCI2 bus 160, to the Host bus 120. Moreover, other I/O devices, e.g., IOD2 162, are connected to the PCI2 bus 160. Similarly, the Host bridge 150 supports communication between PCI devices, e.g., IOD2 162, and devices present on the Host bus 120 or elsewhere in the system.
A bus transaction over the Host bus 120 is often in the form of a read request issued by a requester device. For example, in single chunk requests, the IOD1 142 on the PCI1 bus 140 may issue a read request to the IOD2 162 on the PCI2 bus 160. The purpose of the read request may be to obtain data being processed by or available at the IOD2 162. The Host bridge 130 receives the read request from the IOD1 142, decodes the address from the read request, and issues a single chunk (i.e., 64-bit) read request over the Host bus 120 to the PCI2 bus 160. A cache line read may not be issued to the PCI2 bus 160 because a PCI bus does not support a cache line read (i.e., four 64-bit chunks). A cache line read on a PCI bus triggers a retry request after the target transfers one or more words to the requester. The retry request is often triggered because a PCI device may need to execute other requests before servicing the read request. A retry request is problematic over a PCI bus because speculative reads are not permitted. A speculative read is a read operation (usually in response to a retry request) of data that has already been read. Hence, in response to the single chunk read request by the Host bridge 130, the Host bridge 150 detects the request and issues a single chunk read request to the IOD2 162 on the PCI2 bus 160.
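The single-chunk path just described can be summarized in a short C sketch. The function names and placeholder data below are hypothetical; the sketch only illustrates how each bridge forwards one 64-bit read at a time because the PCI target cannot service a cache-line read.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative sketch of the single-chunk read path in FIG. 1:
     * IOD1 on PCI1 -> Host bridge 130 -> Host bus 120 -> Host bridge 150
     * -> IOD2 on PCI2. Only one 64-bit chunk moves per transaction because
     * the PCI target cannot service a cache-line (4 x 64-bit) read. */

    /* Stand-in for IOD2 supplying data on the PCI2 bus (hypothetical). */
    static uint64_t iod2_read(uint32_t address)
    {
        return 0xA5A5A5A5A5A5A5A5ull ^ address;   /* placeholder data */
    }

    /* Host bridge 150: claims the host-bus request addressed to PCI2 and
     * forwards it to IOD2 as a single-chunk PCI read. */
    static uint64_t host_bridge_150_forward(uint32_t address)
    {
        return iod2_read(address);
    }

    /* Host bridge 130: decodes the address from IOD1's request and issues a
     * single 64-bit read over the host bus instead of a full cache line. */
    static uint64_t host_bridge_130_issue(uint32_t address)
    {
        return host_bridge_150_forward(address);  /* one chunk per transaction */
    }

    int main(void)
    {
        printf("chunk = 0x%016llx\n",
               (unsigned long long)host_bridge_130_issue(0x1000));
        return 0;
    }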
There are several inherent inefficiencies with single chunk requests. A single chunk request does not utilize a host bus efficiently: because it occupies only a fraction of the available bus bandwidth, it slows down the computer system. Moreover, unless retried, a single chunk request ties up the host bus until it is fulfilled. A deferred transaction is not an option for PCI devices because, as a bus requester, a host-PCI bridge does not support deferred transactions.
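To quantify the penalty, consider a brief illustrative calculation: moving one 32-byte cache line as single chunks requires four separate host-bus transactions instead of one.

    #include <stdio.h>

    /* Illustrative overhead count: a 32-byte cache line moved as single
     * 64-bit chunks needs four separate host-bus transactions instead of
     * one, multiplying arbitration and address-phase overhead by four. */
    int main(void)
    {
        const unsigned cache_line_bytes = 32;    /* 4 x 64-bit chunks */
        const unsigned chunk_bytes      = 8;     /* one 64-bit chunk  */
        printf("%u single-chunk transactions per cache line\n",
               cache_line_bytes / chunk_bytes);
        return 0;
    }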
Therefore, there is a need in the technology to make more efficient use of a host bus, e.g., the P6 bus. The full bus bandwidth available on the host bus should be usable in transactions involving secondary buses, e.g., the PCI bus. More particularly, cache line requests, which are accommodated by the host bus, should be supported for devices on secondary buses.
SUMMARY OF THE INVENTION
To overcome the above-mentioned problems, the invention provides a system for executing peer-to-peer mastering over a host bus in a computer system. The system includes a host bridge which performs deferred bus transactions in a computer system having a multiple bus architecture. The host bridge supports communication among multiple input/output devices (IODs) without interrupting or involving the main processor.
In one embodiment of the invention, a method of communication between a requester and a target device is provided. The method comprises the act of establishing a handshake between an IDE Controller and a host master, and issuing a request by the host master over a host bus. The method further comprises the act of acknowledging the request, and transmitting a deferred response to the requester.
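A minimal sketch of this sequence, assuming hypothetical names for the bus state and for each act (the patent summary names only the acts themselves), might look as follows.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical walk-through of the summarized method: (1) establish a
     * handshake between the IDE controller and the host master, (2) the host
     * master issues a request over the host bus, (3) the request is
     * acknowledged, and (4) a deferred response is later returned to the
     * requester. All names below are placeholders, not taken from the patent. */

    typedef struct {
        bool handshake_done;
        bool request_acked;
        bool response_delivered;
    } host_bus_t;

    static void establish_handshake(host_bus_t *bus) { bus->handshake_done = true; }
    static void issue_request(host_bus_t *bus)       { (void)bus; /* drive host bus */ }
    static void acknowledge_request(host_bus_t *bus) { bus->request_acked = true; }

    /* The target answers later, after the host bus was freed for other work. */
    static void send_deferred_response(host_bus_t *bus) { bus->response_delivered = true; }

    int main(void)
    {
        host_bus_t bus = {0};
        establish_handshake(&bus);      /* IDE controller <-> host master   */
        issue_request(&bus);            /* host master drives the host bus  */
        acknowledge_request(&bus);      /* target accepts the transaction   */
        send_deferred_response(&bus);   /* reply delivered to the requester */
        printf("deferred response delivered: %s\n",
               bus.response_delivered ? "yes" : "no");
        return 0;
    }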

