Method and apparatus for accelerating input/output...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C710S022000, C710S052000, C711S130000, C711S137000, C711S141000, C711S142000, C711S143000


active

06711650

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to input/output operations in general, and in particular to a method and apparatus for accelerating input/output operations. Still more particularly, the present invention relates to a method and apparatus for accelerating input/output processing using cache injections.
2. Description of the Related Art
A processor typically controls and coordinates the execution of instructions within a data processing system. Ancillary to instruction execution, the processor frequently moves data from a system memory or a peripheral input/output (I/O) device into the processor for processing, and out of the processor to the system memory or the peripheral I/O device after processing. Thus, the processor often has to coordinate the movement of data from one memory device to another memory device. In contrast, a direct memory access (DMA) transfer moves data from one memory device to another memory device across a system bus within a data processing system without intervening communication through a processor.
In a typical data processing system, DMA transfers are commonly utilized to overlap memory copy operations from I/O devices with useful work by a processor. Typically, an I/O device, such as a network controller or a disk controller, initiates a DMA transfer, after which the processor is interrupted to inform the processor of the completion of the data transfer. The processor eventually handles the interrupt by performing any required processing on the data transferred from the I/O device before the data is passed to a user application that utilizes the data. The user application requiring the same data may also perform additional processing on the data received from the I/O device.
Many data processing systems incorporate cache coherence mechanisms to ensure that copies of data in a processor cache are consistent with the same data stored in a system memory or in other processor caches. In order to maintain data coherency between the system memory and the processor cache, a DMA transfer to the system memory will result in the invalidation of the cache lines in the processor cache containing copies of the data stored in the memory address region affected by the DMA transfer. However, those invalidated cache lines may still be needed by the processor in the near future to perform I/O processing or other user application functions. Thus, when the processor needs to access the data in the invalidated cache lines, the processor has to fetch the data from the system memory, which may take tens or hundreds of processor cycles per cache line accessed. The present disclosure provides a solution to the above-mentioned problem.
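The invalidate-on-DMA behavior described above can be illustrated with a small simulation. This is a hypothetical sketch, not taken from the patent: the cache invalidates its copy of a line when a DMA write touches the corresponding memory address, so the next processor load misses and pays the full memory-access latency (illustrative costs of 1 cycle for a hit and 100 cycles for a miss are assumed).

```python
# Illustrative latencies, not from the patent.
HIT_CYCLES, MISS_CYCLES = 1, 100

class InvalidatingCache:
    """Conventional coherence policy: invalidate cached copies on DMA writes."""
    def __init__(self):
        self.lines = {}  # address -> cached data

    def dma_write(self, memory, addr, data):
        memory[addr] = data
        # Maintain coherence by invalidating our now-stale copy.
        self.lines.pop(addr, None)

    def load(self, memory, addr):
        if addr in self.lines:            # cache hit
            return self.lines[addr], HIT_CYCLES
        data = memory[addr]               # miss: fetch from system memory
        self.lines[addr] = data
        return data, MISS_CYCLES

memory = {0x100: "old"}
cache = InvalidatingCache()
cache.load(memory, 0x100)                 # warm the cache with the old data
cache.dma_write(memory, 0x100, "new")     # DMA transfer invalidates the line
data, cost = cache.load(memory, 0x100)    # processor re-fetches from memory
```

The processor does observe the new data, but only after paying the full miss latency for each invalidated line it touches, which is exactly the cost the disclosure aims to avoid.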
SUMMARY OF THE INVENTION
In accordance with a preferred embodiment of the present invention, a determination is made in a cache controller as to whether or not a bus operation is a data transfer from a first memory to a second memory without intervening communications through a processor, such as a direct memory access (DMA) transfer. If the bus operation is such a data transfer, a determination is made in a cache memory as to whether or not the cache memory includes a copy of data from the data transfer. If the cache memory does not include a copy of data from the data transfer, a cache line is allocated within the cache memory to store a copy of data from the data transfer and the data are copied into the allocated cache line as the data transfer proceeds. If the cache memory does include a copy of the data being modified by the data transfer, the cache controller updates the copy of the data within the cache memory with the new data during the data transfer.
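The update-or-allocate decision summarized above can be sketched as follows. This is an assumed illustration, not the patent's implementation: when the cache controller snoops a bus operation and identifies it as a memory-to-memory transfer, it updates a resident copy in place or allocates a fresh line, so a later processor load hits in cache instead of going to memory.

```python
class InjectingCache:
    """Cache-injection policy: absorb snooped DMA data into the cache."""
    def __init__(self):
        self.lines = {}  # address -> cached data

    def snoop(self, memory, op, addr, data=None):
        if op != "dma_write":
            return                        # only DMA-style transfers qualify
        memory[addr] = data
        # Update a resident copy, or allocate a line and copy the data in
        # as the transfer proceeds; either way the line is valid afterwards.
        self.lines[addr] = data

    def load(self, memory, addr):
        if addr in self.lines:
            return self.lines[addr], "hit"
        self.lines[addr] = memory[addr]   # ordinary miss path
        return self.lines[addr], "miss"

memory = {0x200: "old"}
cache = InjectingCache()
cache.snoop(memory, "dma_write", 0x200, "new")  # DMA data injected into cache
data, outcome = cache.load(memory, 0x200)       # hits on freshly injected data
```

Compared with the invalidate-on-DMA policy, the processor's first access after the transfer is a cache hit, eliminating the memory fetch of tens to hundreds of cycles per line.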
All objects, features, and advantages of the present invention will become apparent in the following detailed written description.


REFERENCES:
patent: 4504902 (1985-03-01), Gallaher et al.
patent: 5802576 (1998-09-01), Tzeng et al.
patent: 5884100 (1999-03-01), Normoyle et al.
patent: 6584513 (2003-06-01), Kallat et al.
Milutinovic et al., "The cache injection/cofetch architecture: initial performance evaluation," Proceedings, Fifth International Symposium, 1997, pp. 63-64.*
Milenkovic et al., "Cache injection on bus based multiprocessors," Proceedings, Seventeenth IEEE Symposium, Oct. 1998, pp. 341-346.
