Electrical computers and digital processing systems: virtual mac – Task management or control
Reexamination Certificate
1997-02-18
2004-07-20
Examiner: Banankhah, Majid (Department: 2127)
C718S101000, C718S108000, C712S041000, C712S228000
active
06766515
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method of scheduling parallel processes in a distributed, multi-kernel, multiprocessor system, and more particularly to a system and method for scheduling parallel processes with no kernel-to-kernel communication.
2. Background Information
A thread model of program execution has proven to be a viable method for parallel execution of program code in both single-processor and multiprocessor machines. Under the thread model, programs are partitioned (by the user or by a compiler) into a set of parallel activities. The instantiation of each activity during execution of the program code is called a thread; if the program code includes more than one thread, the program is said to be multi-threaded. By partitioning the program code into threads, it is possible to create a more easily maintainable, more easily understood, and, possibly, faster program.
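By way of illustration only (this example is not part of the patent disclosure), the following C sketch partitions a simple summation into a set of parallel activities using POSIX threads; the pthreads interface and all names in the sketch are assumed merely as a familiar stand-in for any threads interface.

/* Illustrative only: partition a summation into parallel threads.
 * Assumes a POSIX threads environment; compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define N 1000000

static long data[N];

struct chunk { int start, end; long partial; };

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    c->partial = 0;
    for (int i = c->start; i < c->end; i++)
        c->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct chunk chunks[NTHREADS];
    long total = 0;

    for (int i = 0; i < N; i++)
        data[i] = 1;

    /* Each parallel activity becomes one thread. */
    for (int i = 0; i < NTHREADS; i++) {
        chunks[i].start = i * (N / NTHREADS);
        chunks[i].end   = (i + 1) * (N / NTHREADS);
        pthread_create(&tid[i], NULL, sum_chunk, &chunks[i]);
    }
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += chunks[i].partial;
    }
    printf("total = %ld\n", total);
    return 0;
}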
The thread abstraction described above is often referred to as a user-level thread. It is the entity that a user will create, using a threads interface, in order to express parallelism in a program. The operating system will provide a unit of scheduling, a virtual processor, to which a user-level thread will be mapped; the mapping may be performed statically, or dynamically when executing. This virtual processor will in turn be mapped to a physical processor by the operating system scheduler. Conceptually, it is useful to distinguish the user-level thread from the virtual processor.
A virtual processor may be a process, such as that provided by traditional UNIX systems, a kernel thread, such as that provided by Mach, or some other abstraction. It is the definition of the virtual processor and the related mapping of user-level threads to virtual processors that defines the performance characteristics of a threads implementation.
There are three basic architectures (and a few variants) that merit discussion:
Many-to-one—The name refers to the mapping of many user-level threads to a single virtual processor. User-level threads mapped to a single virtual processor are also called coroutines.
One-to-one—This architecture maps a single user-level thread to a single virtual processor.
Many-to-many—Typically, multiple user-level threads are mapped to a smaller number of virtual processors. This multiplexing of user-level threads onto virtual processors is performed by a second-level scheduler within the threads library (a simplified sketch of such multiplexing follows this list).
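The sketch below is an assumption-laden analogy for the many-to-many multiplexing just described, not code taken from any particular threads library: a small fixed set of worker threads stands in for the virtual processors, and plain work items stand in for user-level threads. A real second-level scheduler would queue full user-level thread contexts, each with its own stack, but the queue-and-worker structure is the same. All identifiers here are illustrative.

/* Illustrative many-to-many sketch: many work items, few "virtual processors". */
#include <pthread.h>
#include <stdio.h>

#define NITEMS   16   /* stand-ins for user-level threads */
#define NWORKERS 3    /* stand-ins for virtual processors  */

static int queue[NITEMS];
static int head = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Second-level "scheduler": each virtual processor repeatedly picks the
 * next runnable item and runs it to completion. */
static void *virtual_processor(void *arg)
{
    long id = (long)arg;
    for (;;) {
        int item;
        pthread_mutex_lock(&lock);
        if (head == NITEMS) { pthread_mutex_unlock(&lock); break; }
        item = queue[head++];
        pthread_mutex_unlock(&lock);
        printf("vp %ld runs item %d\n", id, item);
    }
    return NULL;
}

int main(void)
{
    pthread_t vp[NWORKERS];

    for (int i = 0; i < NITEMS; i++)
        queue[i] = i;
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&vp[i], NULL, virtual_processor, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(vp[i], NULL);
    return 0;
}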
Virtual Processors
Every operating system exports an abstraction that represents the basic unit of scheduling. Under UNIX, for example, the process is the fundamental abstraction that is scheduled; under Mach the kernel thread is the equivalent entity. This abstraction, a virtual processor, is scheduled by the operating system scheduler for execution on available physical processors. It is called a virtual processor because an application may treat it as a processing resource independent of whether it is “backed” by a physical processor.
A traditional UNIX process cannot execute in parallel on a multiprocessor precisely because the virtual processor is the process itself (a single virtual processor can only be scheduled onto a single physical processor); multiple processes can run concurrently, but if there is only a single runnable process, all the processors but one will be idle. A Mach process having multiple kernel threads can run concurrently on multiple processors, since the virtual processor is a kernel thread and a process may comprise multiple kernel threads.
Many-to-one
Until recently there has not been widespread operating system support for threads. In order to more naturally express concurrency in applications, libraries have been built that support lightweight user-level threads without the benefit of operating system support. While these systems do not allow for parallel execution on multiprocessor hardware, they do allow a programmer to structure an application in a fashion that expresses an application's natural concurrency. Such libraries are examples of the many-to-one model. Multiple user-level threads are multiplexed onto a single virtual processor such as a UNIX process.
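A minimal many-to-one sketch, assuming only the standard ucontext routines (it is not drawn from any of the libraries discussed here): two cooperative user-level threads are multiplexed onto a single process, and every switch is an ordinary procedure call with no kernel scheduling of the individual threads involved.

/* Illustrative many-to-one sketch: cooperative user-level threads
 * inside a single process, switched entirely in user space. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, thread_ctx[2];
static int current = 0;

/* Switch from the running user-level thread to another: a procedure call. */
static void yield_to(int next)
{
    int prev = current;
    current = next;
    swapcontext(&thread_ctx[prev], &thread_ctx[next]);
}

static void worker(void)
{
    int me = current;
    for (int i = 0; i < 3; i++) {
        printf("user-level thread %d, step %d\n", me, i);
        yield_to(1 - me);          /* voluntary yield */
    }
}

int main(void)
{
    for (int i = 0; i < 2; i++) {
        getcontext(&thread_ctx[i]);
        thread_ctx[i].uc_stack.ss_sp = malloc(STACK_SIZE);
        thread_ctx[i].uc_stack.ss_size = STACK_SIZE;
        thread_ctx[i].uc_link = &main_ctx;   /* return here when done */
        makecontext(&thread_ctx[i], worker, 0);
    }
    current = 0;
    swapcontext(&main_ctx, &thread_ctx[0]);  /* start thread 0 */
    return 0;
}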
The most significant disadvantage of this approach is that it does not allow a single multi-threaded process to take advantage of multiprocessor hardware because there is only one operating-system-visible virtual processor for the entire program. Another disadvantage is that an executing thread will run until either the process's time quantum expires or the thread voluntarily yields the processor. If the running thread blocks for any reason, such as waiting for an I/O request to complete, all the other threads in the process will also be blocked pending completion of the wait, despite the fact that they are independent of the thread awaiting service. Again this is a direct result of having only one virtual processor per program.
It is worth noting that this problem can be ameliorated by the judicious use of alarm signals. Alarm notifications can be scheduled by the threads library such that on delivery of an alarm signal the threads library regains control. It can then choose to schedule an alternate thread for some period of time up to the balance of the process's time quantum.
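A hedged sketch of the alarm-based technique described above (the named libraries' actual mechanisms may differ): an interval timer delivers SIGALRM periodically, and the handler merely records that the running user-level thread's slice has expired; the library would then perform the actual context switch at its next safe point, since switching contexts directly inside a signal handler relies on behavior POSIX does not guarantee to be async-signal-safe.

/* Illustrative only: regaining control of a many-to-one threads library
 * with an interval timer. The handler only sets a flag; the thread switch
 * would happen at the library's next safe point. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t preempt_pending = 0;

static void on_alarm(int sig)
{
    (void)sig;
    preempt_pending = 1;
}

int main(void)
{
    struct sigaction sa;
    struct itimerval tv;
    int slices = 0;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;
    sa.sa_flags = SA_RESTART;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    memset(&tv, 0, sizeof tv);
    tv.it_interval.tv_usec = 10000;   /* 10 ms slices */
    tv.it_value.tv_usec    = 10000;
    setitimer(ITIMER_REAL, &tv, NULL);

    /* The loop stands in for the currently running user-level thread. */
    for (long work = 0; slices < 3; work++) {
        if (preempt_pending) {
            preempt_pending = 0;
            slices++;
            printf("slice %d expired after %ld iterations; "
                   "the library would switch threads here\n", slices, work);
            work = 0;
        }
    }
    return 0;
}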
This architecture is illustrated in FIG. 1a. It depicts three UNIX processes, with each process having one or more threads 2 and each process being allocated a UNIX address space 3. (In the system shown in FIG. 1a, two of the processes are multi-threaded and the other is single-threaded.) Note that each process, multi-threaded or not, is mapped onto a single processor 4 and thus will never utilize more than a single processor at any instant in time. Despite these disadvantages, this model achieved considerable popularity for three main reasons:
Operating systems had not provided any means for expressing concurrency in a program.
The user-level threads are lightweight because the management operations are implemented as procedure calls that do not involve the operating system kernel.
This style of threads library is easy to implement as it requires no operating system modifications.
Examples of this style of architecture are Sun's LightWeight Process library in versions of SunOS prior to 5.0, Apollo's Concurrent Programming Support library in Domain/OS and early versions of Digital's Concert Multithreaded Architecture package.
One-to-one
The one-to-one model represents the simplest form of operating system support for multiple threads of control. It derives its name from the mapping of a user-level thread 2 to a kernel-level thread 5, the virtual processor in this case, as shown in FIG. 1b. The operating system implements kernel threads as the independently schedulable entities. The creation of a user-level thread results in the creation of a kernel-level thread. The operating system then schedules these kernel threads onto processors and thus effectively schedules the corresponding user-level threads.
There are two significant advantages to this model. It is a simple architecture in that a traditional process scheduler merely has to redefine a virtual processor to be a kernel thread instead of a process. Furthermore, all the scheduling takes place at the kernel level; there is no scheduling of user-level threads and thus no associated complexity. The second and most significant advantage is the potential for a single application to achieve true concurrency on multiprocessor hardware. Multiple virtual processors, possibly from the same process, can be scheduled onto multiple physical processors. Thus, the user-level threads corresponding to the kernel-level threads that are executing on these physical processors are executing in parallel. In addition, if a user-level thread blocks while executing a system call, for example a read from a terminal, the corresponding kernel-level thread will block in the kernel; any other user-level threads within the application, however, are not prevented from executing, because each of them is associated with a kernel-level thread of its own.
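As an illustration of the one-to-one behavior just described (assuming a modern POSIX threads implementation, which on most UNIX-like systems backs each pthread with a kernel-schedulable entity; the function and variable names are the sketch's own): one thread blocks in a read() from the terminal while a second thread keeps making progress. Run interactively so the read actually blocks.

/* Illustrative only: with a one-to-one mapping, a thread blocked in a
 * system call does not stop its siblings. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    char buf[64];
    (void)arg;
    printf("blocker: waiting for terminal input...\n");
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);  /* blocks in the kernel */
    printf("blocker: read %zd bytes\n", n);
    return NULL;
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++) {
        printf("worker: still running (%d)\n", i);
        sleep(1);                 /* keeps executing while blocker waits */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, blocker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t2, NULL);      /* worker finishes regardless of blocker */
    pthread_cancel(t1);          /* tidy up the still-blocked thread */
    pthread_join(t1, NULL);
    return 0;
}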
Inventors: Ananthanarayanan, Rajagopal; Bitar, Nawaf K.; English, Robert M.
Attorney, Agent or Firm: Schwegman Lundberg Woessner & Kluth P.A.
Assignee: Silicon Graphics Inc.