Destination controlled remote DMA engine
Type: Reexamination Certificate (active)
Patent number: 06317799
Filed: 2000-04-28
Issued: 2001-11-13
Examiner: Dharia, Rupal (Department: 2181)
Classification: Electrical computers and digital data processing systems: input/output – Input/output data processing – Direct memory accessing
U.S. Classes: 710/9; 710/23; 710/24; 710/26; 710/27; 710/31; 709/212
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to memory access and, more particularly, to a destination controlled remote direct memory access (“DMA”) engine.
2. Description of the Related Art
One important component of a computer's performance is the efficiency with which it accesses memory. Most, if not all, instructions that a computer executes require the processor to either write data to or read data from memory. Thus, the more efficiently the computer accesses memory, the better its overall performance. Because both reading and writing limit performance, gains can be obtained by improving the efficiency of either operation.
FIG. 1 illustrates a particular computer's prior art memory and input/output (“I/O”) subsystem 10. The subsystem 10 is constructed and operates in accord with the industry standard known as the Peripheral Component Interconnect (“PCI”) specification. The subsystem 10 includes a memory 12 that receives and transmits data over a host bus 14. To facilitate data transfer during I/O operations, the subsystem 10 includes a host/PCI bridge 16 between the host bus 14 and a PCI bus 18. The PCI bus 18 provides a communications mechanism that permits a variety of peripheral components (not shown) to conduct their business without slowing operations on the host bus 14.
The peripheral components in the subsystem 10 are I/O devices, such as a monitor, a keyboard, a mouse, or a printer, that interface with the PCI bus 18 through I/O adapters 20. As used hereafter, the term “I/O adapter” shall mean either an I/O device or an interface to an I/O device. As shown in FIG. 1, there are several I/O adapters 20, each of which must transact its business on the PCI bus 18, but only one can do so at a time. Between transactions, the individual I/O adapters 20 arbitrate among themselves and with the host/PCI bridge 16 to determine which will control the PCI bus 18 for the next transaction. Once an individual I/O adapter 20 wins the arbitration and controls the PCI bus 18, it can access the memory 12 through the host/PCI bridge 16 over the PCI bus 18 and the host bus 14.
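The arbitration step can be made concrete with a small sketch. The following C fragment is illustrative only and is not drawn from the patent or from the PCI specification; the names (arbitrate, NUM_AGENTS) and the rotating-priority policy are assumptions chosen for exposition. It models one round in which several requesting agents compete and exactly one is granted the bus:

    #include <stdio.h>

    #define NUM_AGENTS 4   /* hypothetical: three I/O adapters plus the host/PCI bridge */

    /* One round of a simple rotating-priority arbiter: each agent may assert
     * a request line; exactly one requester is granted the bus per transaction. */
    static int arbitrate(const int request[NUM_AGENTS], int last_winner)
    {
        for (int i = 1; i <= NUM_AGENTS; i++) {
            int candidate = (last_winner + i) % NUM_AGENTS;
            if (request[candidate])
                return candidate;      /* this agent drives the next transaction */
        }
        return -1;                     /* no agent requested the bus */
    }

    int main(void)
    {
        int request[NUM_AGENTS] = { 1, 0, 1, 1 };  /* agents 0, 2 and 3 want the bus */
        int winner = arbitrate(request, 0);
        printf("agent %d wins the PCI bus for the next transaction\n", winner);
        return 0;
    }

In real PCI systems a central arbiter performs this role and the policy is implementation-defined; the rotating priority above is just one plausible choice.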
To write data to an I/O adapter 20, an initiating device (not shown), such as a processor, puts the data on the host bus 14. The host/PCI bridge 16 receives the data from the host bus 14 and writes it to its write buffer 24. The host/PCI bridge 16 then arbitrates for control of the PCI bus 18 and, upon receiving control, writes the data to the I/O adapter 20. The host/PCI bridge 16 then relinquishes control of the PCI bus 18.
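A hedged sketch of this posted-write flow, in C: the struct, field, and function names below are invented for illustration and do not come from the patent. The point is that the host-bus leg completes as soon as the data lands in the bridge's write buffer 24, while the PCI leg is completed separately after arbitration:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical model of the host/PCI bridge's write buffer (element 24). */
    struct bridge {
        uint32_t write_buf;
        int      write_buf_full;
    };

    /* Host-bus leg: the initiating device's data lands in the bridge's write
     * buffer, and the host bus is released without waiting for the PCI leg. */
    static void host_write(struct bridge *b, uint32_t data)
    {
        b->write_buf      = data;
        b->write_buf_full = 1;
    }

    /* PCI leg: after winning arbitration, the bridge drains the buffer to the
     * target I/O adapter (element 20) and relinquishes the PCI bus. */
    static void bridge_drain(struct bridge *b, uint32_t *adapter_reg)
    {
        if (b->write_buf_full) {
            *adapter_reg      = b->write_buf;  /* write data to the I/O adapter */
            b->write_buf_full = 0;
        }
    }

    int main(void)
    {
        struct bridge b = { 0, 0 };
        uint32_t adapter_reg = 0;
        host_write(&b, 0xCAFEF00D);   /* processor puts data on the host bus */
        bridge_drain(&b, &adapter_reg);
        printf("adapter register now holds 0x%08X\n", (unsigned)adapter_reg);
        return 0;
    }

The asymmetry with reads is already visible: the write occupies the buffer only until the single PCI leg completes.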
To read data from the memory 12, an individual I/O adapter 20 wins control of, and then issues a read transaction on, the PCI bus 18. Upon receiving the read transaction, the host/PCI bridge 16 signals the I/O adapter 20 to retry at a later time, reserves a read buffer 22 for use in the read transaction, and queues a memory access request to fetch the data from the memory 12 over the host bus 14. The I/O adapter 20 then relinquishes control of the PCI bus 18. When the host/PCI bridge 16 receives the data, it writes the data into the reserved read buffer 22. The I/O adapter 20, in the meantime, periodically retries getting the data from the host/PCI bridge 16, each retry requiring the I/O adapter 20 to win control of the PCI bus 18. Eventually, the host/PCI bridge 16 has the data in its read buffer 22. The I/O adapter 20 then receives the data from the host/PCI bridge 16, whereupon the host/PCI bridge 16 releases the reserved read buffer 22 and the I/O adapter 20 relinquishes control of the PCI bus 18.
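The delayed-read protocol just described can be caricatured as a small state machine. The C sketch below is an assumption-laden illustration (the return codes, the structure, and the fixed number of retries are all invented for exposition); it captures the essentials: the first attempt reserves a read buffer 22 and is told to retry, and subsequent attempts keep retrying until the fetched data is available:

    #include <stdio.h>

    /* Hypothetical return codes for one PCI read attempt. */
    enum pci_status { PCI_RETRY, PCI_DATA };

    /* State of the bridge's reserved read buffer (element 22). */
    struct read_buffer {
        int reserved;       /* buffer claimed by a pending read     */
        int data_valid;     /* data fetched from memory has arrived */
        int data;
    };

    /* One read attempt by the I/O adapter.  On the first attempt the bridge
     * reserves a read buffer, queues the memory fetch, and signals a retry;
     * on later attempts it returns the data once the fetch has completed. */
    static enum pci_status pci_read(struct read_buffer *buf, int *out)
    {
        if (!buf->reserved) {
            buf->reserved = 1;          /* reserve buffer, queue memory fetch */
            return PCI_RETRY;
        }
        if (!buf->data_valid)
            return PCI_RETRY;           /* fetch still in flight: retry later */
        *out = buf->data;
        buf->reserved = buf->data_valid = 0;   /* release the reserved buffer */
        return PCI_DATA;
    }

    int main(void)
    {
        struct read_buffer buf = { 0, 0, 0 };
        int data, tries = 0;

        /* The adapter must re-win bus arbitration for every attempt (not shown). */
        while (pci_read(&buf, &data) == PCI_RETRY) {
            tries++;
            if (tries == 2) {           /* pretend the host-bus fetch completes */
                buf.data = 42;
                buf.data_valid = 1;
            }
        }
        printf("read completed after %d retries, data = %d\n", tries, data);
        return 0;
    }

Each iteration of the while loop stands in for a fresh arbitration for the PCI bus 18, which is the repeated overhead the background singles out.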
Thus, there are at least two technological problems with the structure and operation of the subsystem 10 in FIG. 1. First, there is a great disparity between reads and writes for the I/O adapters 20 in the efficiency with which the resources of the subsystem 10 are used. Second, the design does not scale well as I/O adapters 20 and PCI buses 18 and 28 are added to expand the I/O subsystem.
More particularly, for the read transaction, a read buffer 22 must be reserved for the entire read transaction. Also, there are many more arbitrations for control of the PCI bus 18 for reads than there are for writes. This disparity is compounded for a read by an I/O adapter 26 by the necessity of operating over the PCI bus 28 and through the PCI/PCI bridge 32. When the number of I/O adapters 20 and 26 performing reads exceeds the number of available read buffers 22 in the bridges 16 and 32, additional latency is incurred before the bridges 16 and 32 can even forward the read requests to the host bus 14. When additional PCI buses 28 are added to expand the I/O subsystem, latencies accumulate, since each bridge 32 must reserve a read buffer 22 from its parent bridge 16, competing with all other bridges and I/O adapters on the PCI/PCI bridge 32's primary bus. For a single read to complete, a read buffer 22 in each bridge 16 and 32 is consumed and, when a read buffer 22 is not available, the transaction stalls. Since each bridge 16 and 32 has a limited number of read buffers 22, the subsystem 10 does not scale well.
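A toy calculation makes the scaling problem visible. In the sketch below, the buffer count and reader counts are hypothetical; the point is only that every outstanding read pins one read buffer 22 in each bridge along its path, so once concurrent readers exceed the per-bridge buffer supply, further reads stall:

    #include <stdio.h>

    #define BUFFERS_PER_BRIDGE 4   /* hypothetical read-buffer count per bridge */

    /* Every outstanding read holds one read buffer in each bridge between the
     * adapter and memory (e.g. both bridge 16 and bridge 32 for adapter 26),
     * so the whole path stalls once any one bridge runs out of buffers. */
    int main(void)
    {
        for (int readers = 1; readers <= 6; readers++) {
            int held = readers < BUFFERS_PER_BRIDGE ? readers : BUFFERS_PER_BRIDGE;
            printf("%d concurrent readers: %s (%d of %d buffers held per bridge)\n",
                   readers,
                   readers > BUFFERS_PER_BRIDGE ? "new reads stall" : "reads proceed",
                   held, BUFFERS_PER_BRIDGE);
        }
        return 0;
    }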
The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.
SUMMARY OF THE INVENTION
The invention, in one embodiment, is a method for accessing memory. The method includes programming a remote DMA engine from a destination; accessing data in the memory with the DMA engine, the DMA engine operating as programmed by the destination; and transferring the accessed data to the destination.
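Read as pseudocode, the claimed method has three steps. The C sketch below is a minimal illustration under stated assumptions: the descriptor layout, the function names, and the use of memcpy as a stand-in for the bus transfer are all inventions of this sketch, not details from the patent's claims:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical descriptor with which a destination programs the remote
     * DMA engine: where to read in memory, how much, and where to deliver. */
    struct dma_descriptor {
        const uint8_t *src;     /* location of the data in memory  */
        uint8_t       *dst;     /* buffer owned by the destination */
        size_t         len;
    };

    /* Steps 2 and 3 of the claimed method: the engine accesses the memory as
     * programmed and transfers the accessed data to the destination. */
    static void dma_engine_run(const struct dma_descriptor *d)
    {
        memcpy(d->dst, d->src, d->len);   /* stand-in for the bus transfer */
    }

    int main(void)
    {
        uint8_t memory[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
        uint8_t dest_buf[8] = { 0 };

        /* Step 1: the destination programs the remote DMA engine. */
        struct dma_descriptor d = { memory, dest_buf, sizeof dest_buf };
        dma_engine_run(&d);

        printf("destination received %u bytes, first byte = %u\n",
               (unsigned)d.len, dest_buf[0]);
        return 0;
    }

The distinguishing feature is step 1: the destination, not an intermediate bridge, programs the transfer.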
REFERENCES:
patent: 4371932 (1983-02-01), Dinwiddie, Jr. et al.
patent: 4805137 (1989-02-01), Grant et al.
patent: 4878166 (1989-10-01), Johnson et al.
patent: 4901232 (1990-02-01), Harrington et al.
patent: 5003465 (1991-03-01), Chisholm et al.
patent: 5175825 (1992-12-01), Starr
patent: 5404463 (1995-04-01), McGarvey
patent: 5475860 (1995-12-01), Ellison et al.
patent: 5634099 (1997-05-01), Andrews et al.
patent: 5881248 (1999-03-01), Mergard
patent: 5890012 (1999-03-01), Poisner
patent: 5954802 (1999-09-01), Griffith
patent: 5968143 (1999-10-01), Chisholm et al.
patent: 5968144 (1999-10-01), Walker et al.
patent: 6000043 (1999-12-01), Abramson
patent: 6081851 (2000-06-01), Futral et al.
Inventors: Bell, D. Michael; Futral, William T.
Attorney, Agent or Firm: Blakley Sokoloff Taylor & Zafman LLP
Assignee: Intel Corporation