System and method for utilizing dispatch queues in a...

Electrical computers and digital processing systems: virtual machine – Task management or control – Process scheduling

Reexamination Certificate


Details

U.S. Classification: C712S203000

Status: active

Patent Number: 06834385

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates in general to computer systems and in particular to multiprocessor data processing systems. More specifically, the present invention relates to a system and method for utilizing dispatch queues in a multiprocessing system.
2. Description of Related Art
Conventional multiprocessor data processing systems typically utilize multiprocessing to simultaneously execute multiple programs or multiple parts of the same program. Multiprocessing is a mode of operation in which two or more of a computer's central processing units (CPUs) execute threads in tandem (see Microsoft Press Computer Dictionary, Third Edition, p. 320). A thread is a set of instructions (such as a part of a program) that can execute as an independent unit.
To coordinate the concurrent execution of multiple threads, the operating system in a typical multiprocessing system includes a queue for each of the system's processors, as well as a scheduler and a dispatcher. The scheduler utilizes the queues to schedule the threads for execution, and the dispatcher dispatches threads from the queues for execution on corresponding processors as those processors become available. A queue that is used in this manner by a dispatcher is known as a dispatch queue. To support different priority levels for different threads, the scheduler may place a high priority thread ahead of other threads in a queue, with the dispatcher looking to the head of that queue for a thread to be dispatched when the corresponding processor becomes available. As mentioned, these components are implemented at the operating system (OS) level.
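The following sketch, written in C purely for illustration (none of these names come from the patent), shows the kind of priority-ordered dispatch queue described above: the scheduler inserts a thread ahead of lower-priority threads, and the dispatcher always takes the thread at the head of the queue when the corresponding processor becomes available.

```c
/* Minimal, illustrative sketch of a per-processor dispatch queue. */
#include <stdio.h>

typedef struct thread {
    int            id;
    int            priority;      /* higher value = dispatched sooner    */
    struct thread *next;
} thread_t;

typedef struct {
    thread_t *head;               /* dispatcher always dequeues the head */
} dispatch_queue_t;

/* Scheduler: place a thread ahead of all lower-priority threads. */
static void schedule(dispatch_queue_t *q, thread_t *t)
{
    thread_t **link = &q->head;
    while (*link && (*link)->priority >= t->priority)
        link = &(*link)->next;
    t->next = *link;
    *link   = t;
}

/* Dispatcher: remove and return the highest-priority waiting thread. */
static thread_t *dispatch(dispatch_queue_t *q)
{
    thread_t *t = q->head;
    if (t)
        q->head = t->next;
    return t;
}

int main(void)
{
    dispatch_queue_t q = { NULL };
    thread_t a = { 1, 5, NULL }, b = { 2, 9, NULL };

    schedule(&q, &a);
    schedule(&q, &b);             /* b is placed ahead of a (priority 9 > 5) */

    for (thread_t *t; (t = dispatch(&q)) != NULL; )
        printf("dispatching thread %d (priority %d)\n", t->id, t->priority);
    return 0;
}
```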
Within the scheduling system of an OS, certain data constructs are used to represent threads and to store state data relating to those threads. For example, when the OS swaps a thread out of execution (i.e., suspends the thread), the operating system must retain state data (also known as context data) for that thread so that the thread may resume processing from the point of interruption when the thread is swapped back in. The OS distributed by International Business Machines Corporation (IBM) under the name Multiple Virtual Storage (MVS), for example, utilizes task control blocks (TCBs) to schedule and dispatch threads. Further, although other vendors may utilize different names for similar constructs, for purposes of this document, the term TCB refers to any data construct that is utilized by scheduling facilities of an OS to represent threads and store state data for those threads in a multiprocessor system.
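As a rough illustration of such a construct, the C structure below sketches the kind of state a TCB-like control block must retain so that a suspended thread can resume at its point of interruption. The field names are assumptions for the sketch and do not reflect the actual MVS TCB layout.

```c
/* Illustrative only: a simplified "task control block" holding the
 * context data the OS needs to resume a suspended thread. */
#include <stdint.h>

enum tcb_state { TCB_READY, TCB_RUNNING, TCB_SUSPENDED };

typedef struct tcb {
    uint32_t       tcb_id;        /* identifies the OS-level unit of work  */
    enum tcb_state state;         /* ready, running, or swapped out        */
    void          *stack_ptr;     /* saved stack pointer at suspension     */
    void          *instr_ptr;     /* saved resume point (program counter)  */
    uint64_t       registers[16]; /* saved general-purpose register image  */
    struct tcb    *next;          /* link for the OS-level dispatch queue  */
} tcb_t;
```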
While OS-level thread-scheduling facilities provide basic support for multiprocessing, further thread management capabilities can be provided at the application level by establishing queue facilities (e.g., a scheduler, a dispatcher, and another set of thread queues) at the application level. For example, application-level queue facilities can be utilized to assign different priorities to different threads at the application level.
However, a limitation of typical conventional application-level queue facilities is that each thread is bound to a particular TCB (i.e., affinity between threads and TCBs is enforced). Affinity between threads and TCBs (i.e., TCB affinity) is enforced because there are certain functions and system calls that require TCB affinity. For example, if a thread performs an input/output (I/O) function, that function may not complete successfully if the thread does not always execute on the same TCB.
An example of an application that utilizes application-level queue facilities in an environment that also includes OS-level queue facilities is the storage management system known as the TIVOLI® Storage Manager (TSM). In particular, TSM utilizes the queue facilities of a middleware component known as the Service Virtual Machine (SVM). In order to comply with the TCB affinity requirements described above, SVM provides multiple dispatch queues, with each dispatch queue bound to a corresponding TCB. Accordingly, each scheduled thread is also bound to a TCB.
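The arrangement just described, with one application-level dispatch queue per TCB and each thread bound to a single TCB, might be sketched as follows. The names app_thread_t, NUM_TCBS, and the enqueue routine are illustrative stand-ins, not SVM interfaces.

```c
/* Sketch of TCB affinity: a thread may only be scheduled on the
 * dispatch queue of the TCB it is bound to. */
#include <stdbool.h>
#include <stddef.h>

#define NUM_TCBS 4

typedef struct app_thread {
    int                bound_tcb;    /* index of the TCB this thread must run on */
    void             (*entry)(void);
    struct app_thread *next;
} app_thread_t;

typedef struct {
    app_thread_t *head;              /* waiting threads for one particular TCB */
    app_thread_t *tail;
} tcb_queue_t;

static tcb_queue_t queues[NUM_TCBS]; /* one dispatch queue per TCB */

/* Affinity rule: enqueue only on the thread's own TCB queue. */
static bool schedule_on_bound_tcb(app_thread_t *t)
{
    if (t->bound_tcb < 0 || t->bound_tcb >= NUM_TCBS)
        return false;                /* reject: no valid binding */

    tcb_queue_t *q = &queues[t->bound_tcb];
    t->next = NULL;
    if (q->tail)
        q->tail->next = t;
    else
        q->head = t;
    q->tail = t;
    return true;
}
```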
At the OS level, when a thread has no more work to perform, the OS suspends that thread and activates a different thread, selecting the new thread from the OS-level dispatch queue. This process of suspending an exhausted thread and dispatching a new thread consumes a large amount of processing resources. For example, in a typical OS, hundreds of instructions must be executed to swap in a new thread for an old thread. Moreover, such swapping operations are typically serialized (through use of facilities such as local locks) to ensure that the multiple processors do not simultaneously execute conflicting swap instructions. By contrast, only twenty or so instructions might be required to dispatch a new thread at the application level. In general, the number of instructions required to dispatch a thread at the OS level is typically at least one order of magnitude greater than the number required at the application level.
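The contrast can be pictured roughly as follows: an OS-level swap serializes on a lock and saves and restores a full register context, while an application-level dispatch is little more than unlinking the next thread from a queue. The lock, context types, and routines below are stand-ins for illustration, not real OS interfaces.

```c
#include <pthread.h>

typedef struct ctx  { unsigned long regs[16], sp, ip; } ctx_t;
typedef struct unit { ctx_t ctx; struct unit *next;   } unit_t;

static pthread_mutex_t local_lock = PTHREAD_MUTEX_INITIALIZER;
static ctx_t           cpu_ctx;     /* stand-in for the processor's live context */

/* OS-level style: serialized, full context save/restore. */
static void os_level_swap(unit_t *old_tcb, unit_t *new_tcb)
{
    pthread_mutex_lock(&local_lock); /* swaps are serialized via a local lock */
    old_tcb->ctx = cpu_ctx;          /* save the outgoing context             */
    cpu_ctx = new_tcb->ctx;          /* restore the incoming context          */
    pthread_mutex_unlock(&local_lock);
}

/* Application-level style: just take the next waiting thread. */
static unit_t *app_level_dispatch(unit_t **queue_head)
{
    unit_t *t = *queue_head;
    if (t)
        *queue_head = t->next;
    return t;
}
```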
One of the advantages of using application-level queue facilities is the ability to dispatch a series of application-level threads to a single OS-level thread without swapping in a new OS level thread each time one of those application-level threads is dispatched. However, when application-level thread scheduling facilities are utilized to dynamically supply content for an OS-level thread, that OS-level thread begins to depart from the definition of a thread (i.e., a set of instructions that can execute as a single unit), in that an OS-level thread might not be linked to any predetermined set of instructions. Therefore, hereinafter the term TCB is utilized to refer to an OS-level thread that is dynamically supplied with sets of instructions by an application, and the term thread is utilized to refer to an actual set of instructions that is treated as an executable unit at the application level. Accordingly, utilizing this terminology, when a thread on a TCB completes, the application can dispatch a new thread to that TCB, thereby avoiding the overhead associated with swapping in a new TCB.
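A minimal sketch of that run loop, under assumed names (work_t, run_queue_pop, and the thread entry points are hypothetical), shows how one TCB can execute a series of application-level threads back to back without an intervening OS-level swap:

```c
#include <stddef.h>

typedef struct work {
    void       (*entry)(void *);    /* the thread's instructions */
    void        *arg;
    struct work *next;
} work_t;

/* Pop the next application-level thread queued for this TCB, or NULL. */
static work_t *run_queue_pop(work_t **head)
{
    work_t *w = *head;
    if (w)
        *head = w->next;
    return w;
}

/* One TCB's dispatch loop: runs threads back to back on the same TCB. */
static void tcb_run_loop(work_t **my_queue)
{
    work_t *w;
    while ((w = run_queue_pop(my_queue)) != NULL)
        w->entry(w->arg);           /* dispatch costs a few instructions,
                                       not an OS-level TCB swap */
    /* Queue empty: in the conventional design the TCB must now be
       relinquished to the OS (see the WAIT/POST discussion below). */
}
```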
However, as mentioned above, in conventional systems, all threads are bound to TCBs. Consequently, an application can avoid the overhead of swapping TCBs only for so long as the application has additional waiting threads that are bound to that particular TCB. If a TCB finishes executing a thread and no more threads are scheduled on the dispatch queue for that TCB, the application must relinquish the TCB to the OS. Even if there are waiting threads in queues for other TCBs, the restriction of TCB affinity prevents the application from dispatching threads from those queues to the free TCB.
In SVM, for example, when a thread on a TCB completes execution and no more threads are scheduled on the dispatch queue for that TCB, the dispatcher in SVM relinquishes control of that TCB to the OS by issuing a WAIT command. The WAIT command activates the dispatcher at the OS level, causing the OS to suspend that TCB and dispatch a new TCB from a dispatch queue at the OS level. The new TCB or TCBs that the OS dispatches may be totally unrelated to the application which issued the WAIT command. Then, when a new thread gets scheduled on the dispatch queue for the suspended TCB, SVM will attempt to reactivate the TCB by issuing a POST command to the OS. However, depending on a number of factors (including the priority of other TCBs in the OS queue facilities), it may be some time before the OS responds to the POST by reactivating the required TCB and returning control of that TCB to the application.
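The WAIT/POST hand-off can be approximated with a POSIX condition variable standing in for the OS's WAIT and POST services (the real MVS interface is not shown here): when a TCB's queue runs dry the dispatcher waits, letting the OS run unrelated TCBs, and scheduling new work posts the TCB awake again, after which the timing of resumption is up to the OS.

```c
#include <pthread.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  posted = PTHREAD_COND_INITIALIZER;
static int             waiting_threads = 0;  /* threads queued for this TCB */

/* Dispatcher side: no more threads for this TCB, relinquish it (WAIT). */
static void tcb_wait_for_work(void)
{
    pthread_mutex_lock(&q_lock);
    while (waiting_threads == 0)
        pthread_cond_wait(&posted, &q_lock); /* OS may run unrelated TCBs now */
    waiting_threads--;                       /* a new thread was scheduled    */
    pthread_mutex_unlock(&q_lock);
}

/* Scheduler side: new thread queued for the suspended TCB, reactivate it
 * (POST).  How quickly the TCB actually resumes is decided by the OS. */
static void post_new_thread(void)
{
    pthread_mutex_lock(&q_lock);
    waiting_threads++;
    pthread_cond_signal(&posted);
    pthread_mutex_unlock(&q_lock);
}
```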
In consideration of the time lost waiting for a response to a POST command, the present invention recognizes that, by minimizing the number of times that an application relinquishes control of TCBs, it is possible to enhance the performance of an application. Further, in consideration of the processing overhead required to swap TCBs, the present invention recognizes that minimizing the number of times that an application relinquishes control of TCBs can also increase a system's …
