Real time thread dispatcher for multiprocessor applications

Classification: Electrical computers and digital processing systems: virtual machine task or process management or task management/control – Task management or control – Process scheduling
Type: Reexamination Certificate
Filed: 1998-10-17
Issued: 2004-08-17
Examiner: Banankhah, Majid A. (Department: 2127)
U.S. Classes: C718S102000, C718S100000, C709S203000, C709S241000
Status: Active
Patent Number: 6,779,182
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a method and apparatus for efficient process scheduling for a multiprocessor system in a real time environment.
2. Background Art
The multiprocessor system (MP) is characterized by the presence of multiple CPUs or processors which cooperate on common or shared computational tasks. By having multiple processors in a single computer system, one can obtain performance enhancement by allowing many processors to share the computational load or by allowing many smaller tasks to be performed in parallel in separate processors. A multiprocessor system can also improve the system reliability since the failure of one processor will not necessarily lead to a crash of the entire system.
The introduction of multiple processors, however, complicates the resource management and allocation problem, since two or more processes may execute simultaneously in the kernel on separate processors. Thus, a scheduler module of an operating system needs to decide which processor should execute a process and, in its general form, must schedule a set of processes on a set of processors with arbitrary characteristics so as to optimize some objective function. This involves selecting a process for execution from a set of processes.
The basic abstractions in an operating system include “tasks” and “threads.” A task is a unit of resource management, and a thread is a single flow of control. Each thread possesses a register state and a stack. The system associates with each thread extra state information relating to its schedulability, including the thread's dispatch priority and its processor affinity, which determines on which processors the thread can execute.
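As a rough illustration of the per-thread scheduling state described above, the following Python sketch models a thread record. The field and method names (dispatch_priority, affinity, can_run_on) are illustrative assumptions, not terms taken from the patent or any particular kernel.

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    """Minimal model of the per-thread scheduling state described above."""
    name: str
    dispatch_priority: int                          # higher value = more urgent (illustrative convention)
    affinity: set = field(default_factory=set)      # processor IDs the thread may run on; empty = unbound
    registers: dict = field(default_factory=dict)   # saved register state
    stack: list = field(default_factory=list)       # per-thread stack

    def can_run_on(self, cpu_id: int) -> bool:
        # An empty affinity set means the thread may run on any processor.
        return not self.affinity or cpu_id in self.affinity

# Example: a thread bound to processors 0 and 1.
t = Thread(name="worker", dispatch_priority=10, affinity={0, 1})
print(t.can_run_on(0), t.can_run_on(3))   # True False
```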
FIG. 1(A) shows a prior art single queue dispatcher for a multiprocessor system wherein a dispatcher uses an array 101 of dispatch queues, indexed by dispatch priority. In FIG. 1(A), when a thread is made runnable, it is placed on a dispatch queue, typically at the end, corresponding to its dispatch priority. When a processor switches to a new thread, it always selects the thread at the beginning of the highest priority non-empty dispatch queue. Threads may not change dispatch priority while on a dispatch queue; the thread must first be removed, its dispatch priority adjusted, and then the thread may be placed on a different dispatch queue.
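The single-queue dispatcher of FIG. 1(A) can be sketched as follows. This is a minimal illustration only, assuming that a higher array index means higher priority and that NUM_PRIORITIES and the class and method names are invented for the example.

```python
from collections import deque

NUM_PRIORITIES = 8   # illustrative size of the dispatch-queue array

class SingleQueueDispatcher:
    """Model of the FIG. 1(A) dispatcher: one array of queues, indexed by priority."""

    def __init__(self):
        self.queues = [deque() for _ in range(NUM_PRIORITIES)]

    def make_runnable(self, thread, priority):
        # A runnable thread is placed at the end of the queue for its priority.
        self.queues[priority].append(thread)

    def select_next(self):
        # A processor always takes the thread at the head of the
        # highest-priority non-empty dispatch queue.
        for priority in reversed(range(NUM_PRIORITIES)):
            if self.queues[priority]:
                return self.queues[priority].popleft()
        return None   # no runnable threads

    def change_priority(self, thread, old_priority, new_priority):
        # A thread may not change priority while queued: remove it first,
        # then re-insert it on the queue for the new priority.
        self.queues[old_priority].remove(thread)
        self.queues[new_priority].append(thread)

d = SingleQueueDispatcher()
d.make_runnable("A", 3)
d.make_runnable("B", 5)
print(d.select_next())   # "B" (higher priority wins)
```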
FIG. 1(B) illustrates a prior art single queue dispatcher where a thread that needs to wait on a synchronization object is placed on a sleep queue 102 associated with the synchronization object. The sleep queue is maintained in dispatch priority order, so that when the synchronization object is released, the highest priority thread waiting for the object is at the head of the sleep queue.
The prior art system shown in FIGS. 1(A) and 1(B) uses a single spin schedule lock to protect all scheduling operations. Each scheduler may obtain or release the lock. When the schedule lock is currently held by a processor, other processors that need the schedule lock “spin” on the lock, waiting for access. In particular, whenever the release of a synchronization object makes some thread runnable, the schedule lock is held while placing the thread on the dispatch queue. To prevent interference and delays from interrupt routines, the holder of the schedule lock runs at an elevated interrupt level.
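The following sketch illustrates how every scheduling operation funnels through one global lock. It is only a model: threading.Lock stands in for the kernel's spin lock, the toy dispatch_queues table and function names are invented, and raising the interrupt level is noted only in a comment since it has no user-level equivalent.

```python
import threading

schedule_lock = threading.Lock()        # stands in for the single global schedule lock

dispatch_queues = {5: ["rt_thread"]}    # toy dispatch-queue table, keyed by priority

def wakeup(thread, priority):
    # Releasing a synchronization object makes `thread` runnable; the
    # schedule lock must be held while the thread is placed on a
    # dispatch queue.  (A real kernel would also raise the interrupt
    # level here so interrupt routines cannot delay the lock holder.)
    with schedule_lock:
        dispatch_queues.setdefault(priority, []).append(thread)

def select_next():
    # Every scheduling decision takes the same lock, so processors that
    # want to schedule concurrently end up waiting ("spinning") in line.
    with schedule_lock:
        for priority in sorted(dispatch_queues, reverse=True):
            if dispatch_queues[priority]:
                return dispatch_queues[priority].pop(0)
    return None

wakeup("worker", 3)
print(select_next())   # "rt_thread" (priority 5 beats priority 3)
```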
Thus, in the prior art system shown in FIGS. 1(A) and 1(B), contention for a single schedule lock can cause processors to wait in line to make a scheduling decision, resulting in some idle processors. As a result, not only is the whole purpose of using multiple processors to achieve faster job processing defeated, but valuable computer resources go unused.
This situation can be somewhat ameliorated by introducing a multiple dispatch queue system wherein each processor is allowed to have its own dispatch queue and its own schedule lock. Such a multi-queue multi-lock system is shown in FIG. 2(A), wherein each processor maintains its own dispatcher queue. For example, there is a separate dispatch queue associated with each of processors 1, 2, . . . , N in FIG. 2(A).
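A minimal sketch of the per-processor arrangement in FIG. 2(A) follows, assuming a PerCpuQueue class invented for the example: each processor owns its own queue and its own lock, so processors no longer contend on one global schedule lock.

```python
from collections import deque
import threading

class PerCpuQueue:
    """Each processor owns one dispatch queue and one schedule lock (FIG. 2(A))."""

    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.lock = threading.Lock()   # per-queue lock instead of one global lock
        self.queue = deque()           # (priority, thread) entries for this CPU

    def enqueue(self, thread, priority):
        with self.lock:
            self.queue.append((priority, thread))

# One dispatch queue per processor, so contention on any single lock is reduced.
cpus = [PerCpuQueue(i) for i in range(4)]
cpus[0].enqueue("A", 3)
cpus[2].enqueue("B", 7)
print(len(cpus[0].queue), len(cpus[1].queue))   # 1 0
```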
In the system of FIG. 2(A), a processor ready to take a new thread starts searching for a runnable thread in its own dispatch queue. When its own queue is not empty, the processor takes the highest priority runnable thread available off its own queue. However, when its own queue is empty, the processor starts searching other dispatch queues for available threads. Still referring to FIG. 2(A), if there is an available runnable thread at any other dispatch queue, unless the thread is marked as only runnable on a particular processor, the processor can take the thread from the other dispatch queue and execute the thread. However, prior art dispatchers such as those shown in FIGS. 1 and 2 are not suitable for real time applications.
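The selection policy just described can be sketched as follows. The data layout (tuples of priority, name, and an optional bound processor) and the function name select_thread are assumptions for illustration only.

```python
# Toy model of the FIG. 2(A) selection policy.  Each entry is
# (priority, thread_name, bound_cpu); bound_cpu is None for unbound threads.
dispatch_queues = {
    0: [],                                  # this processor's own queue is empty
    1: [(4, "bound_thread", 1)],            # runnable only on processor 1
    2: [(6, "unbound_thread", None)],       # may be taken by any processor
}

def select_thread(my_cpu):
    # 1. Prefer the highest-priority thread on this processor's own queue.
    own = dispatch_queues[my_cpu]
    if own:
        own.sort(reverse=True)
        return own.pop(0)[1]
    # 2. Otherwise search the other processors' queues, skipping threads
    #    that are marked as runnable only on a particular processor.
    for cpu, queue in dispatch_queues.items():
        if cpu == my_cpu:
            continue
        candidates = [e for e in queue if e[2] is None or e[2] == my_cpu]
        if candidates:
            best = max(candidates, key=lambda e: e[0])
            queue.remove(best)
            return best[1]
    return None   # nothing runnable anywhere

print(select_thread(0))   # "unbound_thread": the bound thread on CPU 1 is skipped
```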
A real time computing system is designed to deliver a required level of service or processing within a bounded time. Real time computing systems find applications in the areas of virtual reality, factory automation, robotics, teleconferencing, and multimedia broadcast systems. These applications are typically “mixed-mode,” that is, they are partitionable into schedulable entities, some of which require real time response. To achieve a bounded time response, time-critical applications require control over their scheduling behavior.
When a multiprocessor system is utilized to support real time applications, the scheduling of processes becomes even more complicated and consequently a real time operating system must be able to provide some real time capability to time critical tasks. Thus, a real time system must be capable of providing immediate response to specific external events and scheduling particular processes to run within a specified time limit after occurrence of an event. A real time system also needs to guarantee that the operating system can schedule a particular process within a fixed time limit.
To service real time high priority threads as rapidly as possible, the system in FIG. 2(A) can be improved to have a separate real time queue as shown in FIG. 2(B). In FIG. 2(B), an additional superqueue is added at a higher level of the multiple dispatcher queues to provide system-wide visibility of runnable real time threads. The superqueue is a queue of dispatch queues containing those processor queues that hold unbound real time threads with priority higher than some predetermined threshold priority level. Thus the superqueue lists those processors whose highest priority thread has sufficient priority to be considered a real time thread.
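One way to picture the superqueue is sketched below, assuming it is represented simply as a list of processor IDs whose queues hold an unbound thread above a threshold; RT_THRESHOLD and the data layout are invented for the example.

```python
RT_THRESHOLD = 100   # illustrative priority level above which a thread counts as real time

# Per-processor dispatch queues: lists of (priority, thread, bound_cpu).
dispatch_queues = {
    0: [(40, "normal", None)],
    1: [(120, "rt_a", None)],    # unbound real-time thread
    2: [(130, "rt_b", None)],    # unbound real-time thread
}

def build_superqueue():
    # The superqueue is a queue *of dispatch queues*: it lists the processors
    # whose highest-priority unbound thread is above the real-time threshold.
    super_q = []
    for cpu, queue in dispatch_queues.items():
        unbound = [p for (p, _, bound) in queue if bound is None]
        if unbound and max(unbound) > RT_THRESHOLD:
            super_q.append(cpu)
    return super_q

print(build_superqueue())   # [1, 2]
```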
Referring to FIG. 2(B), a processor first searches for runnable threads in the superqueue. If there is a non-empty high priority queue in the superqueue, the processor takes the highest priority runnable thread off that queue. If the superqueue is empty, the processor checks its own queue. If its own queue is still empty, then the processor goes on to other dispatcher queues for runnable threads. If there is any non-empty dispatcher queue in the system, the processor takes the highest priority runnable thread off that queue and starts executing the thread.
Real time threads would be dispatched by examining the superqueue. Thus, when a processor is selecting a thread to run, it first examines the superqueue and then its own queue.
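The three-step selection order (superqueue, then own queue, then other queues) can be sketched as follows; as in the earlier sketches, the data layout, RT_THRESHOLD, and the helper pop_highest are assumptions made for illustration.

```python
RT_THRESHOLD = 100

# (priority, thread) entries per processor; the superqueue names the processors
# currently holding runnable real-time work (priority > RT_THRESHOLD).
dispatch_queues = {
    0: [],
    1: [(120, "rt_thread")],
    2: [(50, "batch_thread")],
}
superqueue = [1]

def pop_highest(queue):
    best = max(queue, key=lambda e: e[0])
    queue.remove(best)
    return best[1]

def select_thread(my_cpu):
    # 1. Real-time work anywhere in the system takes precedence: consult
    #    the superqueue before looking at the local queue.
    for cpu in superqueue:
        if dispatch_queues[cpu]:
            superqueue.remove(cpu)
            return pop_highest(dispatch_queues[cpu])
    # 2. Next, take the highest-priority thread on this processor's own queue.
    if dispatch_queues[my_cpu]:
        return pop_highest(dispatch_queues[my_cpu])
    # 3. Finally, fall back to searching the other processors' queues.
    for cpu, queue in dispatch_queues.items():
        if cpu != my_cpu and queue:
            return pop_highest(queue)
    return None

print(select_thread(0))   # "rt_thread" from the superqueue, not "batch_thread"
```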
Although the above solution outlined with respect to FIG. 2(B) represents an improvement over the prior art and offers a straightforward solution to real time multiprocessor scheduling, it presents a critical race problem in scheduling real time threads. For example, a race condition can be created under the following scenario: suppose processors 1 and 2 are notified that there are two real time threads that are runnable in the superqueue. Processors 1 and 2 proceed to check the superqueue to find which processor queues have the two threads. The two real time threads are in, for example, processor 3's and processor 4's dispatch queues, with the real time thread in processor
Examiner: Banankhah, Majid A.
Law Firm: Beyer Weaver & Thomas LLP
Assignee: Sun Microsystems Inc.