Method and apparatus for performing transactions rendering...

Electrical computers and digital data processing systems: input/output – Input/output data processing

Reexamination Certificate


C710S020000, C710S052000, C710S310000

active

06466993

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to computer systems with input-output (I/O) architectures and, more particularly, and not by way of limitation, to an I2O-compliant computer system that utilizes concurrent non-blocking queuing techniques to perform transaction rendering between host processors and I/O devices using I/O bus write operations.
2. Description of Related Art
In an I2O-compliant computer system, a special I/O architecture is used to facilitate portability between operating systems and host platforms. Because the teachings of the present invention may be better described in relation to the I2O architecture, a brief overview thereof is provided hereinbelow.
Essentially, the I2O architecture uses a “split driver” model which inserts a messaging layer between the portion of the device driver specific to the operating system and the portion of the device driver specific to the peripheral device.
The messaging layer splits the single device driver of today into two separate modules—an Operating System Service Module (OSM) and a Downloadable Driver Module (DDM). The only interaction one module has with another module is through this messaging layer which provides a communication path.
The OSM comprises the portion of the device driver that is specific to the operating system. The OSM interfaces with the operating system of the computer system (which may also be referred to in the art as the “host operating system”) and is executed by the host CPU or processor. Typically, a single OSM may be used to service a specific class of peripherals or adapters. For example, one OSM would be used to service all block storage devices, such as hard disk drives and CD-ROM drives.
The DDM provides the peripheral-specific portion of the device driver that understands how to interface to the particular peripheral hardware. To execute the DDM, an I2O Input/Output Processor (IOP) is added to the computer system. A single IOP may be associated with multiple peripherals, each controlled by a particular DDM, and contains its own operating system such as, for example, the I2O Real-Time Operating System (iRTOS). The DDM directly controls the peripheral and is executed by the IOP under the management of the iRTOS.
Those skilled in the art will recognize that a DDM may typically comprise a Hardware Device Module (HDM) that directly interfaces with the peripheral and is responsible for its control and data transfer associated therewith. DDMs can also comprise an Intermediate Service Module (ISM) which is an additional software interface to the HDM. The ISM is often used for filtering, encoding, and decoding messages to the HDM.
In general operation, the communication model used in the I2O architecture is a message passing system. When the CPU seeks to read from or write to an adapter or peripheral in an I2O system, the host operating system makes what is known as a “request”. The OSM translates the request by the host operating system and, in turn, generates a message. The OSM sends the message across the messaging layer to the DDM associated with the peripheral, which processes it appropriately to achieve a result. Upon completion of the processing, the DDM sends the result back to the OSM by sending an appropriate message through the messaging layer. It can be appreciated that, to the host operating system, the OSM appears just like any other device driver.
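The request/response flow just described can be sketched in miniature. This is a hypothetical illustration of the split-driver pattern, not code from the I2O specification; the structure fields, function names, and the stand-in "device work" are all invented for the example.

```c
#include <assert.h>

/* A message passed across the messaging layer (fields are illustrative). */
typedef struct {
    int target;   /* which peripheral the message addresses */
    int opcode;   /* requested operation */
    int payload;  /* request data in, result data out */
} msg_t;

/* DDM side: peripheral-specific processing, normally run on the IOP. */
static msg_t ddm_process(msg_t m) {
    msg_t reply = m;
    reply.payload = m.payload * 2;  /* stand-in for real device work */
    return reply;
}

/* Messaging layer: the only path between the OSM and the DDM. */
static msg_t messaging_layer_send(msg_t m) {
    return ddm_process(m);
}

/* OSM side: translate a host OS request into a message, send it across
 * the messaging layer, and hand the result back to the OS. */
static int osm_request(int device, int op, int data) {
    msg_t m = { device, op, data };
    msg_t r = messaging_layer_send(m);
    return r.payload;
}
```

To the host operating system, `osm_request` looks like an ordinary driver entry point; the peripheral-specific logic is isolated behind the messaging layer.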
By executing the DDM on the IOP, the time-consuming portion of transferring information from and to the peripheral hardware is off-loaded from the CPU to the IOP. With this off-loading, the CPU is no longer diverted for inordinate amounts of time during an I/O transaction. Moreover, because the IOP is a hardware component essentially dedicated to the processing of the I/O transactions, the problem of I/O bottlenecks is mitigated.
On the basis of the split driver model, the I2O architecture also significantly reduces the number of device drivers that must be written. Typically, peripheral device manufacturers need only write a single DDM for a particular peripheral, which can then operate with any host operating system. The vendors of the host operating system need only write one OSM for each class of peripherals, e.g., the network controller class.
As described in the foregoing, the communication model, that is, the message passing system, utilized in I2O systems is designed to be operable with a compatible messaging hardware interface for communication between the host and the IOP. One of the common implementations of this interface involves a two-way queuing system supplied by the IOP: an inbound queue to receive messages from the host and other remote IOPs, and an outbound queue to pass messages to the host.
A set of concurrent non-blocking methods that demonstrate superior performance over traditional spinlock methods of multiprocessor synchronization has been developed by Maged M. Michael and Michael L. Scott. These methods allow multiple processors to gain concurrent non-blocking access to shared First In, First Out (FIFO) queues with immunity from inopportune preemption and are especially useful for parallel software applications requiring shared access to FIFO queues. Furthermore, these methods demonstrate nearly linear scalability under high contention for critical regions in a multiprocessor environment and are incorporated directly into application software. These methods do not affect processor interrupts and do not require spinlock methods to provide mutual exclusion to a shared critical region.
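The Michael and Scott queue referenced above advances its head and tail pointers with atomic compare-and-swap operations rather than locks. The following is a minimal C11 sketch of that technique, not the authors' published code; safe memory reclamation (e.g., hazard pointers) is deliberately omitted, so dequeued dummy nodes are simply leaked.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct node {
    int value;
    _Atomic(struct node *) next;
} node_t;

typedef struct {
    _Atomic(node_t *) head;  /* always points at a dummy node */
    _Atomic(node_t *) tail;
} msq_t;

static void msq_init(msq_t *q) {
    node_t *dummy = calloc(1, sizeof *dummy);
    atomic_store(&q->head, dummy);
    atomic_store(&q->tail, dummy);
}

static void msq_enqueue(msq_t *q, int v) {
    node_t *n = calloc(1, sizeof *n);
    n->value = v;
    for (;;) {
        node_t *tail = atomic_load(&q->tail);
        node_t *next = atomic_load(&tail->next);
        if (tail != atomic_load(&q->tail)) continue;  /* snapshot stale */
        if (next == NULL) {
            /* Try to link the new node after the current last node. */
            if (atomic_compare_exchange_weak(&tail->next, &next, n)) {
                /* Swing the tail; if this fails, another thread helped. */
                atomic_compare_exchange_weak(&q->tail, &tail, n);
                return;
            }
        } else {
            /* Tail is lagging: help advance it, then retry. */
            atomic_compare_exchange_weak(&q->tail, &tail, next);
        }
    }
}

static int msq_dequeue(msq_t *q, int *out) {
    for (;;) {
        node_t *head = atomic_load(&q->head);
        node_t *tail = atomic_load(&q->tail);
        node_t *next = atomic_load(&head->next);
        if (head != atomic_load(&q->head)) continue;  /* snapshot stale */
        if (next == NULL) return 0;                   /* queue is empty */
        if (head == tail) {
            /* Tail is lagging behind: help advance it first. */
            atomic_compare_exchange_weak(&q->tail, &tail, next);
            continue;
        }
        int v = next->value;  /* read before head moves past the node */
        if (atomic_compare_exchange_weak(&q->head, &head, next)) {
            *out = v;
            return 1;  /* old dummy node is leaked in this sketch */
        }
    }
}
```

Note that no operation disables interrupts or spins on a lock: a preempted thread can never block others, because any thread that observes a lagging tail helps complete the interrupted update.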
Under the current I2O method, the IOP Controller is disposed on the opposite side of the I/O bus from the Host. Thus, when a Host wants to render a transaction to the IOP Controller, the Host must first “read” from the inbound free queue, i.e., a list of free addresses local to the IOP, to obtain a message frame address that corresponds to one of the blocks in the IOP Controller's local memory. This requires that the Host initiate a “read” over the I/O bus to access the queue located on the same side of the I/O bus as the IOP Controller.
However, in order for a “read” to go through the I/O bus, the write post buffers of the I/O bus must first be drained before the “read” can occur. This is necessary in order to preserve coherency and consistency. Typically, an I/O bus, e.g., a PCI (Peripheral Component Interconnect) bus, has a predetermined number of write post buffers, and a Host CPU releases data to these write post buffers, thus freeing the CPU to perform other tasks. In a situation where there is a great deal of write activity over the PCI bus (e.g., a master writing to a memory or a CPU writing down to a device), writes are posted to the I/O bus, and if a read comes through that bus for any reason, the read may experience high latency while waiting for the write post buffers to drain before the read goes through. This problem can occur in either direction. Thus, to overcome this problem, a technique is desirable that completes transactions between host processors and I/O devices without incurring I/O bus read operations, while utilizing concurrent non-blocking queuing techniques.
SUMMARY OF THE INVENTION
The present invention comprises a method and apparatus for rendering and completing transactions between host processors and the IOP controller without incurring read operations over the I/O bus while utilizing concurrent non-blocking queuing techniques.
In accordance with the present invention, a computer system includes one or more hosts coupled via a host bus to each other and to a cached host memory, an Input/Output processor providing data to peripheral devices, and an I/O bus (preferably a PCI bus) disposed between the hosts and the Input/Output processor for transfer of information therebetween. An inbound queue structure receives message information from one of the hosts, and an outbound queue structure sends message information from the I/O processor to one of the hosts. Each of the queue structures comprises a pair of FIFOs designated as a free-list buffer and a post-list buffer.
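Under the structure stated in this summary, one direction of the messaging interface can be sketched as a free-list/post-list pair of FIFOs. In this hypothetical sketch, plain ring buffers stand in for the concurrent non-blocking queues, and the queue depth, field names, and frame addresses are all illustrative.

```c
#include <assert.h>

#define QDEPTH 8  /* illustrative depth */

/* A simple ring-buffer FIFO holding message frame addresses. */
typedef struct {
    unsigned buf[QDEPTH];
    int head, tail;
} fifo_t;

static int fifo_pop(fifo_t *f, unsigned *out) {
    if (f->head == f->tail) return 0;      /* empty */
    *out = f->buf[f->head];
    f->head = (f->head + 1) % QDEPTH;
    return 1;
}

static int fifo_push(fifo_t *f, unsigned v) {
    int next = (f->tail + 1) % QDEPTH;
    if (next == f->head) return 0;         /* full */
    f->buf[f->tail] = v;
    f->tail = next;
    return 1;
}

/* One queue structure (e.g., inbound): a free-list of unused frame
 * addresses paired with a post-list of frames carrying messages. */
typedef struct {
    fifo_t free_list, post_list;
} queue_structure_t;

/* A host renders a transaction by taking a frame address from the
 * free-list, filling the frame (elided here), and posting it; in the
 * invention both queue operations can be completed with bus writes. */
static int render_transaction(queue_structure_t *q, unsigned *frame) {
    if (!fifo_pop(&q->free_list, frame)) return 0;
    return fifo_push(&q->post_list, *frame);
}
```

The consumer on the other side drains the post-list, processes each frame, and eventually returns the address to the free-list for reuse.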
