Apparatus and method for ensuring forward progress in...

Electrical computers and digital data processing systems: input/output – Input/output data processing – Direct memory accessing

Reexamination Certificate


Details

US classification: C710S120000, C711S141000, C711S146000, C711S145000, C711S163000

Type: Reexamination Certificate

Status: active

Patent number: 06636906


FIELD OF THE INVENTION
The present invention relates generally to computer systems. More particularly, the invention relates to a mechanism for ensuring forward progress in coherent I/O systems.
BACKGROUND OF THE INVENTION
A current trend in the design of I/O systems is to use a cache in the host bridge for transferring data to and from I/O devices. The presence of one or more caches in the host bridge means that the host bridge has to participate in cache coherency actions, including resolving conflicts when the same cache line is accessed by multiple caches. For example, several I/O devices and processors can access a common semaphore that synchronizes multiple accesses to a shared resource in an atomic manner. It is also common in some I/O systems for two or more disk arrays to store the same data, so that the data remains available in the event of a failure of one of the disk arrays. By way of another example, one portion of a piece of data can be used by one device for one purpose while another portion of the same data is used by another device for a different purpose. For instance, the lower bytes of a cache line can be used by one I/O device to control memory bus traffic, whereas the upper bytes of the same cache line can be used by another device to control traffic to the processor bus.
Typically, an I/O system utilizes one or more caches to store data accessed by the I/O devices. The use of multiple caches in the system requires a cache coherency mechanism to ensure that the data in the caches and in main memory remain coherent.
A problem that often arises in a cache coherent I/O system is the increased latency time that is involved in accessing the data when it does not reside in the cache associated with the requesting I/O device. This latency may be attributed to a remote source that has the data and may also be due to the bus protocol used by the requesting I/O device.
For example, in some I/O systems, the Peripheral Component Interconnect (“PCI”) bus is used as the communication link that interconnects various I/O devices to a host bridge that interfaces with a system memory bus. The PCI bus interface issues a retry command to a requesting I/O device when the host bridge does not have the requested data, so that other devices may use the PCI bus while the host bridge obtains the data. The requesting I/O device will make a subsequent request for the data and, if the data is then available, the host bridge will return it to the device.
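By way of illustration, the following C sketch models this delayed-read retry flow. The type and function names, the single-entry cache, and the 64-byte line size are assumptions for exposition, not details from the patent.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative model of PCI delayed-read (retry) handling in a
     * host bridge. The names, the single-entry cache, and the 64-byte
     * line size are assumptions made for exposition. */

    typedef enum { PCI_DATA_READY, PCI_RETRY } pci_reply;

    typedef struct {
        uint64_t addr;      /* cache line address                */
        bool     valid;     /* line present in the bridge cache? */
        bool     fetching;  /* fetch to system memory pending?   */
        uint8_t  data[64];  /* cache line payload                */
    } bridge_line;

    static bridge_line line;           /* a single line, for brevity */

    /* Called when an I/O device issues a PCI read to the bridge. */
    pci_reply bridge_read(uint64_t addr, uint8_t **out) {
        if (line.valid && line.addr == addr) {
            *out = line.data;          /* hit: return the data      */
            return PCI_DATA_READY;
        }
        if (!line.fetching) {          /* miss: begin the fetch ... */
            line.addr = addr;
            line.fetching = true;      /* request sent to memory    */
        }
        return PCI_RETRY;              /* ... and tell the device to
                                          retry while others use the
                                          bus */
    }

    /* Called when the memory controller returns the line. */
    void fetch_complete(const uint8_t payload[64]) {
        for (int i = 0; i < 64; i++)
            line.data[i] = payload[i];
        line.valid = true;
        line.fetching = false;
    }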
It is possible for the cache to lose ownership of a cache line, due to a snoop request from another cache, before the device that originally requested the cache line comes back for the data. This may happen when a device under another cache unit, or another processor, tries to access the same cache line. When the original device comes back for the requested data, the cache controller must re-request the cache line from the system and retry the I/O device. It is also possible for two I/O devices under two different cache units to request access to the same cache line. Immediately after one cache unit obtains ownership of the cache line, the second cache unit issues a snoop request, and the first cache unit gives up ownership before its I/O device has had a chance to get the data. When that I/O device comes back for the data, the first cache unit re-requests the cache line, which snoops it out of the second cache unit before the second cache unit can deliver the data to its own requesting I/O device. In this situation, the cache line is transmitted back and forth between the two cache units without either requesting I/O device ever obtaining the data. This can cause starvation, or forward progress, problems, since neither device retrieves the data. A further problem attributable to this situation is the loss of interconnect and system memory bandwidth, since the same cache line is requested multiple times. Accordingly, there is a need to overcome these shortcomings.
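The scenario can be made concrete with a small, hypothetical simulation; it models only the alternation of ownership described above and is not code from the patent.

    #include <stdio.h>

    /* Hypothetical model of the ownership ping-pong. Two cache units
     * each have a retried I/O device waiting on the same line; each
     * re-acquisition snoops the line away from the other unit before
     * that unit's device returns, so ownership migrates indefinitely
     * and neither device is ever handed the data. */

    int main(void) {
        int owner = 0;               /* unit currently holding the line */
        int served[2] = { 0, 0 };    /* has either device got the data? */

        for (int step = 0; step < 8 && !served[0] && !served[1]; step++) {
            int other = 1 - owner;
            /* The other unit's pending re-request snoops the line away
             * before the owner's device comes back for it. */
            printf("step %d: unit %d loses the line to unit %d before "
                   "its device returns\n", step, owner, other);
            owner = other;           /* no device is ever served */
        }
        printf("devices served: %d, %d (livelock)\n", served[0], served[1]);
        return 0;
    }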
SUMMARY OF THE INVENTION
In summary, the technology of the present invention pertains to a snapshot mechanism that allows an I/O device to obtain the value that cacheable data had at the time its read request was made, even though the value may have changed thereafter. In this manner, the I/O device can make forward progress without incurring the delay of obtaining the updated value. The value returned to the I/O device is still coherent, since the read request occurred before the data was updated.
A multiprocessor computer system embodying the snapshot methodology can have one or more cells connected by a high speed interconnect. Each cell includes one or more processors connected to a memory controller unit that interfaces with the interconnect. The memory controller unit is also connected to a memory bank and an I/O subsystem that includes an I/O bridge unit coupled to a number of I/O devices through one or more I/O buses.
The system memory image of the multiprocessor computer system is distributed across the processors, memory banks, and I/O bridge units of each cell. The processors and the I/O bridge units include a number of internal caches that, along with the memory banks, can hold portions of the system memory image. A portion of the system memory can be cacheable by the caches within a cell and/or by the caches of other cells. To ensure that the data in main memory and the caches remains coherent, a cache coherency protocol is used.
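The patent does not name a particular coherency protocol. As one common example, the line states of a MESI-style protocol might be sketched in C as follows (an assumption for illustration only):

    /* Assumed MESI-style line states. The patent says only that "a
     * cache coherency protocol is used" and does not name one; MESI
     * is shown here purely as a common example. */
    typedef enum {
        LINE_MODIFIED,   /* dirty, exclusively owned by this cache  */
        LINE_EXCLUSIVE,  /* clean, exclusively owned by this cache  */
        LINE_SHARED,     /* clean, possibly present in other caches */
        LINE_INVALID     /* not present in this cache               */
    } coherency_state;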
An I/O device can request access to cacheable data by making a DMA read request to its associated host I/O bridge unit. The host I/O bridge unit may have one or more cache units that service DMA read requests originating from select PCI buses. If the requested data is not resident within its associated cache unit, the I/O bridge unit seeks the data from the system memory controller that owns the cache line where the requested data resides.
Each cache unit includes a cache controller unit and a cache having tag, status, and data units. Each cache line in the data unit comprises a predetermined number of bytes (a power of 2) and has an associated line in the tag and status units. A tag line includes a set of attributes that uniquely identifies the requesting I/O device and the I/O request, as well as other data. A status line includes a read lock and status bits indicating a number of states associated with the cache line. The read lock indicates whether or not the cache line data has been returned to the original requestor. The status bits are used to maintain cache coherency and to assist the snapshot mechanism. One such state used by the snapshot mechanism is the snapshot state, which indicates whether the cache line's ownership was given up due to a snoop request before the original DMA read request was serviced. A prefetch state indicates whether or not the cache line was speculatively prefetched without an explicit DMA read request.
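A C sketch of the tag, status, and data entries just described follows. The field names and widths, and the 64-byte line size, are assumptions; the patent specifies only that the tag identifies the requestor and request and that the status carries a read lock plus snapshot and prefetch states.

    #include <stdbool.h>
    #include <stdint.h>

    #define LINE_BYTES 64      /* "a predetermined number of bytes
                                  (a power of 2)"; 64 is an assumed
                                  example value */

    /* Tag line: uniquely identifies the original requestor and the
     * original I/O request. Field names and widths are assumptions. */
    typedef struct {
        uint64_t line_addr;    /* cache line address                  */
        uint8_t  bus_id;       /* PCI bus the request arrived on      */
        uint8_t  device_id;    /* requesting I/O device               */
        uint16_t request_id;   /* identifies the original DMA read    */
    } tag_line;

    /* Status line: read lock plus state bits for coherency and for
     * the snapshot mechanism. */
    typedef struct {
        bool    read_lock;     /* set until the data is returned to
                                  the original requestor              */
        bool    snapshot;      /* ownership given up to a snoop before
                                  the original DMA read was serviced  */
        bool    prefetch;      /* line was speculatively prefetched
                                  without an explicit DMA read        */
        uint8_t coh_state;     /* coherency status bits (e.g. the
                                  MESI-style states sketched earlier) */
    } status_line;

    /* Data line: the cached bytes themselves. */
    typedef struct {
        uint8_t bytes[LINE_BYTES];
    } data_line;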
When an I/O device requests a cache line that is not resident in the I/O bridge's cache, an entry for the cache line is made in the tag and status units of the cache. In a first embodiment of the present invention, only one I/O request is pending for a particular cache line at a time. Subsequent requests from other I/O devices for the same cache line are not processed until the cache line data is returned to the original I/O device. By storing attributes of the DMA read request that uniquely identify the original requestor and the original request, the snapshot mechanism ensures that the original I/O device will receive the cache line data readily and hence, make forward progress.
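Continuing the struct sketch above, the single-pending-request rule might look like this (a sketch under assumed names, not the patent's implementation):

    /* Sketch of the first embodiment's rule, continuing the tag_line
     * and status_line types above: only one request may be pending
     * per cache line. A request from any other device for a locked
     * line is retried and not recorded, so the original requestor
     * keeps its claim on the data. Names are assumptions. */

    typedef enum { ACCEPTED, RETRIED } outcome;

    outcome handle_dma_read(tag_line *tag, status_line *st,
                            uint64_t addr, uint8_t bus, uint8_t dev,
                            uint16_t req) {
        if (st->read_lock) {
            /* A request is already pending for this line; only the
             * original requestor may be serviced. */
            if (tag->bus_id != bus || tag->device_id != dev ||
                tag->request_id != req)
                return RETRIED;      /* other devices must wait      */
            return ACCEPTED;         /* original requestor returns   */
        }
        /* No pending request: record this one as the original. */
        tag->line_addr  = addr;
        tag->bus_id     = bus;
        tag->device_id  = dev;
        tag->request_id = req;
        st->read_lock   = true;
        return ACCEPTED;             /* fetch proceeds on its behalf */
    }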
When the cache unit receives a DMA read request for a cache line that needs to be fetched from the system memory, an entry is made for the cache line in the tag and status units. The line is then fetched from the system memory by sending a request transaction to the memory controller, and the read lock is set in the status unit. The system memory controller returns the data associated with the cache line to the cache unit. When the original device comes back for the data, the cache unit services the request with the returned cache line data.
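Putting the pieces together, a hedged sketch of the fetch, snoop, and return sequence follows; the keep-a-copy-on-snoop behavior reflects the snapshot idea from the summary, and all remaining details are assumptions:

    #include <string.h>    /* memcpy; continues the sketch above */

    /* Hedged sketch of the fetch/snoop/return sequence. If a snoop
     * arrives while the read lock is held, ownership is ceded but the
     * data is kept as a "snapshot": a coherent copy of the value as
     * of the original read request, reserved for the original
     * requestor. Details beyond the patent text are assumptions. */

    void on_fill(data_line *dl, const uint8_t *mem) {
        memcpy(dl->bytes, mem, LINE_BYTES);  /* data back from memory;
                                                read lock already set */
    }

    void on_snoop(status_line *st) {
        if (st->read_lock)
            st->snapshot = true;   /* give up ownership, but keep the
                                      copy for the unserviced request */
    }

    const uint8_t *on_device_return(status_line *st, data_line *dl) {
        st->read_lock = false;     /* original request now serviced   */
        st->snapshot  = false;     /* line may be dropped after this  */
        return dl->bytes;          /* the value as of the original
                                      read request                    */
    }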
