Electrical computers and digital processing systems: support – Synchronization of clock or timing signals, data, or pulses
Reexamination Certificate
2000-02-01
2003-07-01
Lee, Thomas (Department: 2183)
Electrical computers and digital processing systems: support
Synchronization of clock or timing signals, data, or pulses
C714S039000, C709S241000
active
06587955
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of Invention
The invention relates generally to methods and apparatus for synchronizing threads in a distributed object system. More particularly, the invention relates to methods and apparatus for providing desired locking behavior to one or more objects.
2. Description of Relevant Art
In general, a thread is a sequence of central processing unit (CPU) instructions or programming language statements that may be independently executed. During the execution of an object-based program, a thread may attempt to execute operations that involve multiple objects. Conversely, multiple threads may attempt to execute operations that involve a single object. Frequently, only one thread is allowed to invoke one of some number of operations, i.e., synchronized operations, that involve a particular object at any given time. That is, only one thread may be allowed to execute a synchronized operation on a particular object at one time. A synchronized operation, e.g., a synchronized method, is block-structured in that it requires the thread invoking the method to first synchronize with the object on which the method is invoked, and to desynchronize with that object when the method returns. Synchronizing a thread with an object generally entails controlling access to the object using a synchronization primitive before invoking the method.
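The block-structured behavior described above can be illustrated with a minimal Java sketch; the class and method names below are illustrative, not taken from the patent. The caller of a `synchronized` method implicitly synchronizes with the object on entry and desynchronizes on return.

```java
// Minimal sketch of a block-structured synchronized operation in Java.
public class Counter {
    private int count = 0;

    // Only one thread at a time may execute this method on a given
    // Counter instance: the caller implicitly acquires the object's
    // monitor on entry and releases it when the method returns.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```

Because both methods synchronize on the same object, no two threads can interleave inside `increment` on the same `Counter` instance.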
Since multiple threads must work together on a shared data resource, there must be a mechanism for preventing these threads from destroying data integrity by, for example, writing at the same time to a particular data area. This particular problem has been solved by using synchronization primitives such as locks, mutexes, semaphores, and monitors to control access to shared resources during periods in which allowing a thread to operate on shared resources would be inappropriate. By way of example, in order to prevent more than one thread from operating on an object at any particular time, objects are often provided with locks. The locks are arranged such that only the thread that has possession of the lock for an object is permitted to execute a method on that object.
In real time operating systems, in addition to being able to synchronize on shared resources, threads must be ranked according to their relative importance such that each thread has a corresponding priority. In such an operating system, the higher the priority of a particular thread, the more important it is that the thread execute within a predefined time. By way of example, in a window manager, the thread responsible for cursor movement should have a higher priority than those threads looking after particular windows. Similarly, in an Internet browser, such as Netscape Communicator™, the thread drawing GIFs should have a much lower priority than the thread running the control panel. In another example, in a spacecraft, the guidance and control thread should have a higher priority than the communications thread. Put more simply, the idea of thread priority is that higher priority threads get CPU time ahead of lower priority threads.
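In Java, the ranking described above maps onto `Thread.setPriority`; the sketch below mirrors the window-manager example, with hypothetical thread roles. Priority is a hint to the scheduler rather than a hard guarantee on most platforms.

```java
// Sketch: assigning relative priorities to threads, as in the
// window-manager example. The roles are illustrative.
public class PriorityDemo {
    public static Thread cursorThread() {
        Thread t = new Thread(() -> { /* handle cursor movement */ });
        t.setPriority(Thread.MAX_PRIORITY);   // most urgent work
        return t;
    }

    public static Thread renderThread() {
        Thread t = new Thread(() -> { /* draw images in the background */ });
        t.setPriority(Thread.MIN_PRIORITY);   // least urgent work
        return t;
    }
}
```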
In those systems having prioritized threads, scheduling of the prioritized threads is typically performed by a scheduler under the control of a scheduling algorithm. A typical scheduling algorithm requires that each thread have an integer priority (i.e., one that is bounded above and below by system constants) and that the scheduler works with what is referred to as a run queue.
FIG. 1 illustrates a conventional run queue 100. The run queue 100 is an ordered queue of n threads (thread0 through threadn) having k priority levels (1 through k). During scheduling, when an active thread is preempted (i.e., a higher priority thread becomes active), the preempted thread (the lower priority thread) is placed at the head of the queue 100 associated with its particular priority. By way of example, if a thread10 (thread 1 having a priority of 0) is preempted by a thread01 (i.e., thread 0 having a priority of 1), then thread1 is placed at the head of the queue associated with priority 0. In this way, the preempted thread1 is runnable ahead of any other thread of the same priority.
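A run queue of this kind can be sketched as one FIFO queue per priority level; the class below is a hypothetical model (threads represented as strings), not the patent's implementation. Higher-numbered levels are treated as higher priority, matching the example above, and a preempted thread goes to the head of its level's queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the run queue of FIG. 1: one FIFO queue per
// priority level, with preempted threads placed at the HEAD of their
// level so they run ahead of same-priority peers.
public class RunQueue {
    private final Deque<String>[] levels;   // index = priority level

    @SuppressWarnings("unchecked")
    public RunQueue(int k) {
        levels = new Deque[k];
        for (int i = 0; i < k; i++) levels[i] = new ArrayDeque<>();
    }

    // A newly runnable thread waits behind its same-priority peers.
    public void makeRunnable(String thread, int prio) {
        levels[prio].addLast(thread);
    }

    // A preempted thread becomes runnable ahead of its peers.
    public void preempted(String thread, int prio) {
        levels[prio].addFirst(thread);
    }

    // Dispatch the runnable thread with the highest priority level.
    public String next() {
        for (int i = levels.length - 1; i >= 0; i--) {
            if (!levels[i].isEmpty()) return levels[i].pollFirst();
        }
        return null;
    }
}
```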
In those situations where the highest priority thread is kept from running, what is referred to as priority inversion occurs. FIG. 2 illustrates an example of bounded priority inversion well known to those skilled in the art. By bounded priority inversion, it is meant that the high priority thread is kept from running for a limited period of time. Referring to FIG. 2, suppose a high priority thread T1 becomes blocked waiting for a particular event to happen. During the period that the thread T1 is blocked, suppose a low priority thread T2 starts to run and in doing so obtains (i.e., locks) a mutex for a shared resource at a time t1. While the mutex is locked by the low priority thread T2, the event that the thread T1 is waiting for occurs and thereby wakes up the high priority thread T1. Priority inversion takes place when the high priority thread T1 attempts to lock the mutex held by the low priority thread T2 at a time t2. In effect the high priority thread T1 must wait for the low priority thread T2 to release the mutex at time t3. It is called bounded inversion since the inversion is limited by the duration that the mutex for the shared resource is locked by the low priority thread T2.
In some situations, however, a more disruptive priority inversion referred to as an unbounded priority inversion can occur. By unbounded priority inversion it is meant that a high priority thread is kept from running for what could potentially be an unlimited (or unbounded) length of time. For example, suppose a high-priority thread T1 waits for a resource locked by a low-priority thread T3 at a time t1, and the low-priority thread T3 waits while a middle-priority thread T2 executes. In this situation, the high-priority thread T1 is made to wait while a thread of lower priority (the middle-priority thread T2) executes, because the running middle-priority thread T2 prevents the low-priority thread T3 from releasing the lock. All that is required for this to happen is that while the low-priority thread T3 has locked the mutex, the middle-priority thread T2 becomes unblocked (thereby preempting the low-priority thread T3) and then runs indefinitely.
There are several well-known priority inversion avoidance protocols which have been used extensively to prevent, or at least substantially reduce, the incidence and effect of priority inversions. One such approach to avoiding priority inversion, and more particularly unbounded priority inversion, is referred to as a priority inheritance protocol. As is well known in the art, priority inheritance means that when a thread waits on a mutex owned by a lower priority thread, the priority of the owner is increased to that of the waiter. In the priority inheritance protocol, when a thread locks a mutex its priority is not changed. The action takes place when a thread attempts to lock a mutex owned by another thread. In this situation the priority of the thread owning the mutex is raised to the priority of the blocked thread (if higher). When the thread releases the mutex, its old priority (i.e., its priority prior to locking the mutex) is restored. This prevents unbounded priority inversion since the low priority thread inherits the high priority and thus cannot be preempted by a medium priority thread.
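The inheritance rule can be sketched with a simplified mutex model; this is a hypothetical illustration in which thread state is modeled with plain fields, whereas a real implementation lives in the scheduler or kernel.

```java
// Hypothetical sketch of priority inheritance on a simple mutex model.
public class PiMutex {
    public static class SimThread {
        public int priority;
        public SimThread(int p) { priority = p; }
    }

    private SimThread owner;          // current lock holder, or null
    private int ownerSavedPriority;   // owner's priority before boosting

    // Locking an uncontended mutex does not change the caller's priority.
    public boolean tryLock(SimThread t) {
        if (owner == null) {
            owner = t;
            ownerSavedPriority = t.priority;
            return true;
        }
        // Contended: raise the owner to the waiter's priority (if higher),
        // so a medium-priority thread can no longer preempt it.
        if (t.priority > owner.priority) {
            owner.priority = t.priority;
        }
        return false;
    }

    // Releasing the mutex restores the owner's pre-lock priority.
    public void unlock() {
        owner.priority = ownerSavedPriority;
        owner = null;
    }
}
```

In the unbounded-inversion scenario above, T3 would be boosted to T1's priority when T1 blocks on the mutex, so T2 can no longer starve it.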
Another conventional approach used to avoid unbounded priority inversion is referred to as the priority ceiling protocol. As is well known in the art, priority ceiling means that while a thread owns the mutex it runs at a priority higher than that of any other thread that may acquire the mutex. In the priority ceiling solution, each shared mutex is initialized to a priority ceiling. Whenever a thread locks this mutex, the priority of the thread is raised to the priority ceiling. This works as long as the priority ceiling is greater than or equal to the priorities of any thread that may lock the mutex (hence its name). The action takes place when a th
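The ceiling rule can be sketched with the same simplified model; again this is a hypothetical illustration, not the patent's implementation. Unlike priority inheritance, the boost happens immediately on lock, not when contention arises.

```java
// Hypothetical sketch of the priority ceiling protocol: every thread
// that locks the mutex runs at the mutex's fixed ceiling until unlock.
public class CeilingMutex {
    public static class SimThread {
        public int priority;
        public SimThread(int p) { priority = p; }
    }

    private final int ceiling;   // >= priority of any thread that may lock
    private int savedPriority;
    private SimThread owner;

    public CeilingMutex(int ceiling) { this.ceiling = ceiling; }

    public void lock(SimThread t) {
        owner = t;
        savedPriority = t.priority;
        t.priority = ceiling;    // raised immediately on lock
    }

    public void unlock() {
        owner.priority = savedPriority;
        owner = null;
    }
}
```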
Foote William
Fresko Nedim
Long Dean Roy Ernest
Beyer Weaver & Thomas LLP
Lee Thomas
Sun Microsystems Inc.
Suryawanshi Suresh K
Real time synchronization in multi-threaded computer systems