Interrupt mechanism on NorthBay

Classification: Electrical computers and digital data processing systems: input/output – Interrupt processing – Interrupt queuing
Type: Reexamination Certificate
Filed: 1998-11-03
Issued: 2001-02-06
Examiner: Ray, Gopal C. (Department: 2781)
US Class: C710S266000
Status: active
Number: 06185652
ABSTRACT:
The present invention relates generally to interrupt mechanisms and, particularly, to high-performance interrupt mechanisms that do not require the use of realtime-interrupt technology.
BACKGROUND OF THE INVENTION
Conventional Interrupt Systems
Computing systems that were designed and built during the 1950's and early 1960's used a “polling” scheme to track internal and external system events. The polling scheme is simple and effective, but it has two major drawbacks. First, it consumes processing power on event polling, thereby slowing the system's performance. Second, the ability to detect an event is highly dependent on the frequency of polling for that particular event. If the polling frequency is high, valuable processing power is wasted on the polling and less power is left for other tasks. If the polling frequency is too low compared to the frequency of an event that the processor is trying to detect, the processor may miss some occurrences of the event.
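As an illustration (not taken from the patent), a minimal polling loop in C makes both drawbacks concrete; the register address, bit mask, and function names below are hypothetical:

#include <stdint.h>

/* Hypothetical memory-mapped status register; the address and bit layout
 * are illustrative only. */
#define DEVICE_STATUS  ((volatile uint32_t *)0x40000000u)
#define EVENT_PENDING  (1u << 0)

static void handle_event(void)  { /* service the device event */ }
static void do_other_work(void) { /* everything else the system must do */ }

/* Classic polling loop: the processor repeatedly reads a status register.
 * Cycles spent reading and testing the flag are unavailable for real work,
 * and if the device asserts EVENT_PENDING faster than the loop comes back
 * around, occurrences of the event are missed. */
void poll_forever(void)
{
    for (;;) {
        if (*DEVICE_STATUS & EVENT_PENDING)
            handle_event();
        do_other_work();   /* the more work here, the lower the polling rate */
    }
}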
Interrupt mechanisms were invented to improve system performance and have been widely used in the past three decades. There are two different types of interrupt architecture: “realtime interrupt architecture” and “non-realtime interrupt architecture”. Each architecture can be evaluated in terms of the classic measures of interrupt-response time and interrupt latency, each of which affects system throughput and overall system performance.
Generally, a “realtime interrupt architecture” implementation (referred to hereafter as a “realtime system”) offers better system performance than a “non-realtime interrupt architecture” implementation (non-realtime system) but is also more costly to build. A realtime system is typically more expensive than its non-realtime cousin because the realtime system requires a faster processor. A realtime system is also more complex because it requires a “realtime-operating-system” to fully realize the advantages of its realtime interrupt architecture. Despite its higher cost, the realtime architecture provides a level of performance that is far superior to that of its non-realtime counterpart.
A realtime system is defined in terms of its response time in detecting and servicing an interrupting event. In general, a realtime system must be capable of detecting and servicing the event well within a pre-defined amount of time. Its reaction to an event must be quick enough that no events will be missed, which generally implies a requirement for a fast processor.
There is no clear definition of a “non-realtime” system. Generally speaking, any slow system that imposes large interrupt latency or is slow to react to interrupt events is characterized as a non-realtime system. Typically, such systems are less complicated than realtime systems and, as a result, are less expensive to build.
Each type of architecture is further divided into a “blocking interrupt architecture” sub-type and a “non-blocking interrupt architecture” sub-type. In general, a blocking interrupt architecture can be implemented in the command-issuing-processor (typically referred to as the master) or in the command-receiving-processor (typically referred to as the slave). The blocking mechanism blocks subsequent operation of the interrupting device pending handling of the interrupt event, which simplifies the handshaking protocol between the interrupt-producer and the interrupt-server. For example, in a blocking-interrupt architecture a processor sends several commands to another device, which can be a general-purpose-processor or a special-purpose-processor. The command-receiving-processor (the slave) executes the first command and, when the operation is complete, sends a signal to interrupt the command-issuing-processor (the master). Meanwhile, the slave still has commands waiting for execution, but those subsequent executions are blocked by the pending interrupt. In other words, the blocking mechanism prohibits further operations until the first pending interrupt has been serviced and processed. Because the subsequent operations are blocked, the current state of the event and its associated information are preserved until the interrupt-server has time to process them. Simple hardware and software implementations can therefore easily satisfy the simple handshaking requirements.
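A minimal sketch of this blocking handshake, written in C for illustration only (the shared flag, the struct names, and the busy-wait are assumptions, not the patent's implementation), shows the slave stalling after each command until the master has serviced the interrupt:

#include <stdbool.h>
#include <stddef.h>

/* Illustrative shared state between the command-issuing processor (master)
 * and the command-receiving processor (slave); the names are hypothetical. */
typedef struct {
    volatile bool interrupt_pending;  /* set by the slave, cleared by the
                                         master's interrupt handler        */
} handshake_t;

typedef struct {
    int opcode;                       /* placeholder command encoding */
} command_t;

static void execute(const command_t *cmd)
{
    (void)cmd;                        /* carry out the command's work */
}

/* Slave side of a blocking-interrupt scheme: after finishing one command it
 * raises an interrupt and starts no further command until the master has
 * serviced it, so the state associated with that interrupt is preserved and
 * the handshake stays simple. */
void slave_run(handshake_t *hs, const command_t *queue, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        execute(&queue[i]);

        hs->interrupt_pending = true;     /* interrupt the master */

        /* Block: subsequent executions wait until the pending interrupt has
         * been serviced (the master clears the flag in its handler). */
        while (hs->interrupt_pending) {
            /* stall; in real hardware this would not be a software spin */
        }
    }
}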
On the other hand, a non-blocking-interrupt architecture allows the command-receiving-processor (the slave) to keep running without stalling. Thus, the non-blocking scheme provides a level of performance that is superior to that of its blocking cousin. However, serious drawbacks are inherent in the non-blocking architecture. Since the interrupt is not blocked, the interrupt-server needs to be much faster than the interrupt-producer; otherwise, if interrupts are generated faster than they can be served, some interrupts will be lost.
Even if the interrupt-server is fast enough to process all interrupt events, it must provide temporary storage large enough to buffer the information (e.g., interrupt state) associated with each interrupt. The more pending interrupts that are allowed, the bigger the size of the required temporary storage.
To reduce the implementation cost of a non-blocking implementation, temporary storage elements can be implemented as part of the main-memory, but this will slow system performance. Temporary storage can also be implemented with fast hardware registers, but this implementation is quite expensive. Furthermore, since interrupts are not blocked in a non-blocking implementation, several interrupts that occur at the same time will be difficult to prioritize and serve in a timely manner. This imposes serious limitations on the non-blocking architecture, and greatly reduces the usefulness of the non-blocking interrupt scheme.
To solve the above-mentioned problems of the non-blocking architecture, a software interrupt-counter (or interrupt queue) can be used to keep track of the interrupt events. However, even if a software counter is used, the CPU (interrupt-server) must still respond to each interrupt quickly enough to avoid missing any events. The benefit of using a non-blocking architecture is greatly reduced in such an implementation because some of the processor's power is wasted on the task of updating the software counter.
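A hedged sketch of the software interrupt-counter/queue idea described above (the names, the fixed capacity, and the single-producer/single-consumer assumption are illustrative, not the patent's): the interrupt handler only records the event, and the main loop drains the queue later.

#include <stdint.h>

#define QUEUE_SIZE 64   /* illustrative capacity; must be a power of two here */

/* Minimal single-producer/single-consumer event queue: the interrupt handler
 * is the producer, the main loop is the consumer.  `volatile` keeps the
 * compiler from caching the indices across the ISR boundary. */
static volatile uint32_t events[QUEUE_SIZE];
static volatile uint32_t head;   /* written only by the ISR        */
static volatile uint32_t tail;   /* written only by the drain loop */

/* Called from the interrupt handler: record the event and return quickly.
 * If the consumer falls behind and the queue fills, the event is dropped --
 * the same loss the text describes when the server is slower than the
 * producer. */
void isr_enqueue(uint32_t event_id)
{
    uint32_t next = (head + 1) & (QUEUE_SIZE - 1);
    if (next != tail) {
        events[head] = event_id;
        head = next;
    }
    /* else: queue full, event lost */
}

/* Called from the main loop: service everything that has accumulated.  This
 * is where the processor time "wasted" on bookkeeping, as the text puts it,
 * is spent. */
void drain_events(void (*service)(uint32_t))
{
    while (tail != head) {
        service(events[tail]);
        tail = (tail + 1) & (QUEUE_SIZE - 1);
    }
}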
Another solution to the problems of the non-blocking architecture is to use devices that can stack two or three interrupts, with each interrupt having a set of associated status registers, so that the information belonging to each interrupt is preserved until the interrupt is serviced. This is an expensive approach: the number of required status-register sets grows rapidly as more stacked interrupts are allowed. Thus, this approach is practical only for a small stack, as it is very expensive to implement a large number of hardware register sets.
Present Problem: Insufficiency of Conventional Interrupt Architectures for High Speed Processors Executing Small Tasks
A goal of the present invention is to provide interrupt services for an ASIC called NorthBay™ (NorthBay is a trademark of Mylex Corp.), which provides support services for a RAID controller (RAID is an acronym for “Redundant Array of Independent Disks”). A RAID controller coordinates reading and writing in a RAID system, which comprises multiple independent disks on which data is stored, typically redundantly. To ensure data fidelity, the RAID controller generates a parity value for each data record written to the RAID system; the parity value is subsequently stored in the RAID system. Whenever the RAID controller determines that data on one of the disks in the array has been corrupted, the parity record is used to regenerate the original data record.
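For context only (the passage above does not spell out the computation), RAID parity is conventionally the bytewise XOR of the data blocks in a stripe, which is what allows a lost block to be regenerated from the surviving blocks and the parity. A minimal sketch, with hypothetical names:

#include <stddef.h>
#include <stdint.h>

/* Compute the parity block for a stripe: parity = d0 XOR d1 XOR ... XOR dn-1.
 * If any single data block is later lost, XOR-ing the parity with the
 * remaining blocks reproduces it. */
void compute_parity(uint8_t *parity, const uint8_t *const *blocks,
                    size_t nblocks, size_t block_len)
{
    for (size_t i = 0; i < block_len; ++i) {
        uint8_t p = 0;
        for (size_t b = 0; b < nblocks; ++b)
            p ^= blocks[b][i];
        parity[i] = p;
    }
}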
Among other things, the NorthBay ASIC implements a fast special-purpose-processor that computes the parity values used in the RAID system. The data for which the NorthBay ASIC is to compute parity is specified by the RAID controller's CPU in response to host disk transactions. It is anticipated that the NorthBay ASIC
Inventors: Shek, Edde Tang Tin; Stubbs, Robert E.
Attorney: Flehr Hohbach Test Albritton & Herbert LLP
Assignee: International Business Machines Corporation
Examiner: Ray, Gopal C.