Scalable interruptible queue locks for shared-memory multiprocessor

Electrical computers and digital data processing systems: input/output – Access locking

Reexamination Certificate

Status: active

Patent number: 06473819

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field of the Invention
The present invention relates generally to data processing and in particular to shared-memory multiprocessors. Still more particularly, the present invention relates to scalable interruptible queue locks for a shared-memory multiprocessor and a method of operation thereof.
2. Description of the Related Art
Operating system kernels require efficient locking primitives to enforce serialization. Spin locks and queue locks are two common serialization mechanisms. In addition to scalability and efficiency, interruptibility is a desired trait. Because of atomicity requirements, a thread may have to raise its priority level before entering a critical section that manipulates memory. Additionally, enabling the thread to be interrupted while it is waiting for the lock increases the responsiveness of the system to interrupts.
A spin lock is a simple construct that uses the cache coherence mechanism in a multiprocessor system to control access to a critical section. A typical spin lock implementation has two phases. In the spin phase, the waiting computation agents, or threads, spin on a cached copy of a single global lock variable. It should be noted that in the context of operating system (OS) kernels, there is generally a one-to-one correspondence between computation agents, or threads, and processors. In the compete phase, the waiting computation agents all try to atomically modify the lock variable from the available to the held state. The one computation agent that succeeds in this phase has control of the lock; the others go back to the spin phase. The transition from the spin to the compete phase is initiated when the lock holder releases the lock by marking the lock variable as available.
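For concreteness, a minimal C11 sketch of this spin/compete structure is shown below; the names spinlock_t, spin_acquire, and spin_release are illustrative and are not taken from the patent.
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_bool held;   /* the single global lock variable; false = available */
} spinlock_t;

static void spin_acquire(spinlock_t *l)
{
    for (;;) {
        /* Spin phase: read a (cached) copy until the lock looks available. */
        while (atomic_load_explicit(&l->held, memory_order_relaxed))
            ;
        /* Compete phase: every waiter tries to move available -> held
         * atomically; the one whose exchange observes "available" wins,
         * and the others return to the spin phase. */
        if (!atomic_exchange_explicit(&l->held, true, memory_order_acquire))
            return;
    }
}

static void spin_release(spinlock_t *l)
{
    /* Marking the variable available lets all waiters leave the spin phase. */
    atomic_store_explicit(&l->held, false, memory_order_release);
}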
Spin locks have two main advantages: they require only a few instructions to implement and they are easily designed to be interruptible. The main disadvantage of spin locks is that they do not scale well. The compete phase can cause significant contention on the system buses when a large number of computation agents simultaneously attempt to acquire the lock. Spin locks are thus suitable only for lightly contended locks. In addition, since the lock is not necessarily granted in first in first out (FIFO) order, spin locks are typically not fair.
In a queue lock, computation agents queue up to acquire the lock. The lock holder releases the lock by granting it to the computation agent at the head of the queue. If there are no computation agents in the queue, the lock is simply marked available to the next computation agent that tries to acquire it. Queue lock implementations typically involve two phases. In the enqueue phase, a computation agent joins the queue by atomically updating the queue data structure. In the spin phase, queued computation agents spin waiting for the lock to be granted. In contrast to spin locks, the computation agents in a queue lock spin at distinct memory locations that typically map to distinct cache lines. The lock holder signals that the lock is available by updating a single computation agent's spin location. Since the computation agents spin on distinct memory locations, the lock holder wakes up only one computation agent when it releases the lock.
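The best-known queue lock with this structure is the MCS lock of Mellor-Crummey and Scott; the C11 sketch below illustrates the enqueue and per-node spin phases. It is offered only as a generic example of a queue lock, not as the patent's algorithm, and all names are illustrative.
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool                locked;  /* this agent's private spin location */
} mcs_node_t;

typedef struct {
    _Atomic(mcs_node_t *) tail;         /* queue tail; NULL means lock is free */
} mcs_lock_t;

static void mcs_acquire(mcs_lock_t *l, mcs_node_t *me)
{
    atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&me->locked, true, memory_order_relaxed);
    /* Enqueue phase: atomically append this agent's node to the queue. */
    mcs_node_t *pred = atomic_exchange_explicit(&l->tail, me, memory_order_acq_rel);
    if (pred == NULL)
        return;                         /* queue was empty: lock acquired */
    atomic_store_explicit(&pred->next, me, memory_order_release);
    /* Spin phase: wait on our own cache line until the predecessor hands off. */
    while (atomic_load_explicit(&me->locked, memory_order_acquire))
        ;
}

static void mcs_release(mcs_lock_t *l, mcs_node_t *me)
{
    mcs_node_t *succ = atomic_load_explicit(&me->next, memory_order_acquire);
    if (succ == NULL) {
        /* No visible successor: try to mark the lock free. */
        mcs_node_t *expected = me;
        if (atomic_compare_exchange_strong_explicit(&l->tail, &expected, NULL,
                                                    memory_order_acq_rel,
                                                    memory_order_relaxed))
            return;
        /* A successor is mid-enqueue; wait for it to link itself in. */
        while ((succ = atomic_load_explicit(&me->next, memory_order_acquire)) == NULL)
            ;
    }
    /* Wake exactly one waiter by updating its private spin location. */
    atomic_store_explicit(&succ->locked, false, memory_order_release);
}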
Queue locks are generally more complicated to implement than spin locks. Their main advantage is that they scale well. Unlike spin locks where a lock release causes a free-for-all among the waiting computation agents, at most one computation agent is woken up in a queue lock. This makes them particularly suited for heavily contended locks. Queue locks can enforce fairness by having the queue data structures preserve the order in which computation agents enqueue themselves. The main disadvantage of queue locks over spin locks is the increased number of memory operations caused by the enqueue-spin-wakeup cycle. For lightly contended locks, these extra operations can significantly increase the time it takes to acquire a lock.
An interruptible lock is one that can handle interrupts between the time when a computation agent expresses a desire to acquire a lock and the time that it actually acquires it. Depending on the contention for a lock and the time spent inside the critical section it controls, a significant period of time may elapse between when a computation agent begins the process of acquiring the lock and when it finally gets control. In order to preserve atomicity of accesses to key data structures, the system may enforce the restriction that the critical section be entered only at a high interrupt request level (IRQL). The priority level is typically an attribute of the operating environment wherein only interrupts above a certain priority will be serviced. On many systems, this is referred to as the IRQL. The ideal procedure for handling this situation is to have a computation agent wait for the lock at a lower IRQL and raise the IRQL only after the computation agent gets control of the lock.
TABLE 1
Spin lock
begin:
  while (lockvar == held);
  [temp = lockvar; lockvar = held]  ATOMIC
  if (temp == available)
    return
  else
    goto begin
Consider the spin lock algorithm shown above in Table 1. Table 2 below illustrates a mechanism for incorporating interruptibility into this algorithm. The idea is to raise the IRQL just prior to entering the compete phase. If the attempt succeeds, the lock has been acquired at the higher IRQL. If the attempt fails, i.e., another computation agent has acquired the lock, the IRQL is restored to its original level. This ensures that any interrupt that would have been serviced at the original IRQL continues to be serviced while the computation agent waits for the lock in the spin phase.
TABLE 2
Interruptible spin lock
oldirql = GetIRQL( );
begin:
  while (lockvar == held);
  SetIRQL (HIGH);
  [temp = lockvar; lockvar = held]  ATOMIC
  if (temp == available)
    return
  else {
    SetIRQL (oldirql);
    goto begin;
  }
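A C11 rendering of the Table 2 pseudocode could look like the sketch below. GetIRQL and SetIRQL stand in for the platform's interrupt-level primitives (assumed here, matching the names in the table), and a single atomic exchange plays the role of the bracketed ATOMIC step.
#include <stdatomic.h>
#include <stdbool.h>

typedef unsigned irql_t;
#define IRQL_HIGH 15u                   /* placeholder value for the high IRQL */

irql_t GetIRQL(void);                   /* assumed platform primitives */
void   SetIRQL(irql_t level);

static atomic_bool lockvar;             /* false = available, true = held */

static void interruptible_spin_acquire(void)
{
    irql_t oldirql = GetIRQL();
    for (;;) {
        /* Spin phase at the original IRQL, so pending interrupts keep
         * being serviced while this agent waits. */
        while (atomic_load_explicit(&lockvar, memory_order_relaxed))
            ;
        /* Raise the IRQL only for the compete attempt. */
        SetIRQL(IRQL_HIGH);
        if (!atomic_exchange_explicit(&lockvar, true, memory_order_acquire))
            return;                     /* acquired at the higher IRQL */
        /* Lost the race: drop back to the original IRQL and spin again. */
        SetIRQL(oldirql);
    }
}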
Making queue locks interruptible is not as simple as it is for spin locks. The problem lies in the fact that in a queue lock, a computation agent can be granted the lock at any time after it joins the queue. Contrast this with a spin lock, where a computation agent knows that it will not get control of the lock unless it initiates the compete phase. The straightforward approach of raising the IRQL after a queue lock has been acquired creates a window of vulnerability between when the lock is acquired and when the IRQL is raised. A deadlock can occur during this window. For instance, consider a low priority external interrupt that arrives within the window, and suppose that servicing it requires acquiring the very queue lock under consideration. The deadlock is now apparent: in order for the interrupt service handler to obtain the lock, the lock must first be released, but in order for the lock to be released, the interrupt service handler must finish, allowing the original acquire-release cycle to complete. Thus, a more sophisticated mechanism is required.
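To make the window concrete, the unsafe ordering can be marked directly in code; the fragment below reuses the hypothetical mcs_acquire and SetIRQL sketches given earlier and is not a working solution.
/* Naive, UNSAFE interruptible queue lock: shown only to locate the window
 * described above; builds on the hypothetical sketches given earlier. */
void naive_interruptible_acquire(mcs_lock_t *l, mcs_node_t *me)
{
    mcs_acquire(l, me);     /* the lock may be granted at any point in here */
    /* Window of vulnerability: the lock is already held, but the IRQL is
     * still low. An interrupt taken here whose handler needs this same lock
     * can never be satisfied, because the release can only run after the
     * handler returns: deadlock. */
    SetIRQL(IRQL_HIGH);
    /* ... critical section, then release the lock and restore the IRQL ... */
}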
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to provide an improved multiprocessor system.
It is another object of the present invention to provide scalable interruptible queue locks for shared-memory multiprocessors and a method of operation thereof.
To achieve the foregoing objects, and in accordance with the invention as embodied and broadly described herein, a method for a computation agent to acquire a queue lock in a multiprocessor system that prevents deadlock between the computation agent and external interrupts is disclosed. The method provides for the computation agent to join a queue to acquire a lock. Next, upon receiving ownership of the lock, the computation agent raises its priority level to a higher second priority level. In response to a receipt of an external interrupt having a higher priority [...]
