Minimizing cache overhead by storing data for communications...

Electrical computers and digital data processing systems: input/ – Input/output data processing – Peripheral monitoring

Reexamination Certificate


Details

C710S019000, C710S052000, C711S111000, C711S118000, C711S147000


active

06182164

ABSTRACT:

TECHNICAL FIELD
This invention relates to interaction of a peripheral device with a host system, and more particularly to a method and apparatus for storing data used for communications between the peripheral device and the host system into separate locations in memory corresponding to different cache lines to minimize cache data processing overhead.
BACKGROUND OF THE INVENTION
The present invention will be described with an example application for an Ethernet computer network peripheral device which couples a host computer system to a network of computers. In this example application, a CPU of the host computer system and the Ethernet computer network peripheral device share access to a shared memory within the host computer system. In particular, the present invention is described with respect to a list of descriptors that are shared for access between the CPU and the computer network peripheral device as described herein. However, from this example application, it should be appreciated by one of ordinary skill in the art of electronic systems design that the present invention may be practiced for other computer peripheral devices that share access to a shared memory with the host computer system.
Referring to FIG. 1, a computer peripheral device 102 may be an Ethernet computer network peripheral device which allows a host computer 104 to communicate with other computers within a network of computers 106. Such a computer peripheral device 102 receives and transmits data packets on the network of computers 106 in accordance with standard data communications protocols, such as the IEEE 802.3 network standard or the DIX Ethernet standard, as is commonly known to one of ordinary skill in the art of Ethernet computer network peripheral device design.
The host computer 104 may be a PC or a workstation, and has a host system which includes a CPU 108 and a shared memory 110, which may be any data storage device found in a PC or a workstation. The CPU 108 further processes a data packet received from the network of computers 106 or generates a data packet to be transmitted on the network of computers 106. The shared memory 110 is shared between the CPU 108 of the host system 104 and the computer network peripheral device 102. In a DMA (Direct Memory Access) mode of operation, the computer network peripheral device 102 has direct access to the shared memory 110 within the host system of the computer 104.
When the computer network peripheral device 102 receives a data packet from the network of computers 106, that data packet is written into the shared memory 110 directly by the computer network peripheral device 102 for further processing by the host system CPU 108. The CPU 108 then accesses the shared memory 110 to further process the data packet stored within the shared memory 110.
Alternatively, the CPU 108 accesses the shared memory 110 to write a data packet to be transmitted on the network of computers 106. The computer network peripheral device 102 then accesses the shared memory 110 to read the stored data packet in order to transmit that data packet over the network of computers 106.
Since both the CPU 108 and the computer network peripheral device 102 access the shared memory 110, such shared access to the shared memory 110 is coordinated between the CPU 108 and the computer network peripheral device 102 for harmonious interaction between the two devices. Thus, referring to FIG. 2, the CPU 108 of the host system 104 and the computer peripheral device 102 share at least one buffer, including a first buffer 212, a second buffer 214, and a third buffer 216, in the shared memory 110. A buffer may be used to store a data packet received or to be transmitted over the network of computers 106.
Access to the shared memory 110 between the CPU 108 and the computer network peripheral device 102 is coordinated by the use of descriptors. Referring to FIG. 2, a respective descriptor is within the shared memory 110 for each buffer within the shared memory 110: a first descriptor 222 corresponds to the first buffer 212, a second descriptor 224 corresponds to the second buffer 214, and a third descriptor 226 corresponds to the third buffer 216. Each descriptor has respective control data and respective status data corresponding to the respective buffer associated with that descriptor.
Thus, the first descriptor 222 has first control data 232 and first status data 242 corresponding to the first buffer 212. The second descriptor 224 has second control data 234 and second status data 244 corresponding to the second buffer 214. The third descriptor 226 has third control data 236 and third status data 246 corresponding to the third buffer 216.
In the prior art, the control data and the status data corresponding to a buffer are typically located in the same memory location. For example, the control data and the status data corresponding to a buffer may be located within a single byte within the shared memory 110.
The CPU 108 writes the control data corresponding to a buffer to communicate control information to the peripheral device 102, including, for example, the completion of processing by the CPU 108 of data within the corresponding buffer. In that case, the peripheral device 102 reads the control data corresponding to a buffer to determine the completion of processing by the CPU 108 of data within that corresponding buffer. Conversely, the peripheral device 102 writes the status data corresponding to a buffer to communicate status information to the CPU 108, including, for example, the completion of processing by the peripheral device 102 of data within the corresponding buffer. Thus, the CPU 108 reads the status data corresponding to a buffer to determine the completion of processing by the peripheral device 102 of data within that corresponding buffer.
In the prior art, the control data and the status data of a descriptor are located closely together within the shared memory 110 such that the control data and the status data usually are in the same cache line within the shared memory 110. For example, the first control data 232 and the first status data 242 may be in a first cache line 252, the second control data 234 and the second status data 244 may be in a second cache line 254, and the third control data 236 and the third status data 246 may be in a third cache line 256. Alternatively, more than one descriptor may be in any one cache line in the prior art.
When reading data from the shared memory 110, the CPU 108 of the host system caches data from memory before reading that data from cache. Because the CPU 108 often operates in loops, caching data from memory provides faster access when the CPU 108 repeatedly reads that data from cache. Referring to FIG. 3, the peripheral device 102 accesses data in the shared memory 110 via a system interface bus 302, while the CPU 108 reads data in the shared memory 110 via a cache 304. The cache 304 includes a dirty bit 306 and a valid bit 308, as known to one of ordinary skill in the art of digital system design. A cache and memory controller 310 coordinates the availability of data from the shared memory 110 to the CPU 108 and to the peripheral device 102.
When the control data and the status data corresponding to a buffer are in the same cache line within the shared memory 110, both the control data and the status data are loaded into the cache 304 when any part of the data corresponding to that cache line is loaded into the cache 304. In that case, the coordination of the availability of data from the shared memory 110 to the CPU 108 and to the peripheral device 102 requires relatively high cache data processing overhead, as illustrated in the table of FIG. 4 for a typical write-through cache.
