User level adaptive thread blocking

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing

Reexamination Certificate


Details

C709S241000, C710S200000

Reexamination Certificate

active

06223204

ABSTRACT:

TECHNICAL FIELD
The present invention relates to computer systems, and more particularly relates to improved methods, apparatuses, and computer program products for allocating resources in multithreading computing environments.
BACKGROUND OF THE INVENTION
FIG. 1 is a diagram of a conventional multithreaded computer memory 2 connected to first and second data processors 3, particularly identified as first and second processors 3a and 3b. Multithreaded computer operations can, however, be implemented with a single data processor as well. Multithreaded computer systems are disclosed in U.S. Pat. No. 5,515,538, "Apparatus and Method for Interrupt Handling in a Multi-threaded Operating System Kernel," granted in 1996 to inventor Steven R. Kleiman and assigned to Sun Microsystems, Inc., of Mountain View, Calif. That patent is hereby expressly incorporated hereinto and made a part of the present application. Computer memory 2 includes a user level memory region 2a and a kernel level memory region 2b.

A multithreaded computer memory is a computer memory on which multiple threads are being executed. A thread is an independent program code execution sequence. User level memory region 2a is shown possessed by a plurality of threads 5, including thread 5a through thread 5f, a threads library 8, a data element 9, and a mutex (i.e., mutual exclusion) lock 9a. Kernel level memory region 2b is shown possessed by a plurality of light weight processes 12 and a run queue 14. Data element 9 is code or information required for processing by a particular thread. The plurality of light weight processes 12 includes light weight processes 12a-12d. Threads library 8 is a mechanism for scheduling individual ones of threads 5 onto particular ones of the light weight processes ("LWPs"). A scheduled thread blocks other threads from running on an associated LWP until the scheduled thread has completed running through its execution sequence. For details regarding threads and light weight processes, see, for example, Programming with UNIX Threads by Charles J. Northrup (John Wiley & Sons, Inc., 1996), pp. 4-6. Briefly, light weight processes are kernel entities which are scheduled to run entirely within kernel level memory region 2b. Threads 5 are scheduled at user level memory 2a onto LWPs 12. Particular LWPs 12 are in turn scheduled onto particular ones of processors 3. Run queue 14 contains information for scheduling LWPs 12 onto multiple processors 3a and 3b.

For example, of the six threads 5a-5f which FIG. 1 shows, only four threads 5c-5f are shown scheduled onto corresponding four LWPs 12a-12d. Further, of the four LWPs 12a-12d, only two LWPs 12c-12d are scheduled onto respective processors 3a and 3b.

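The many-to-many arrangement just described (six threads multiplexed over four LWPs, only two of which are on processors) is what POSIX threads expose through the contention scope attribute: "process" scope lets the threads library multiplex a thread over the pool of LWPs, while "system" scope binds it to its own kernel scheduling entity. The sketch below is illustrative only and is not part of the patent; it uses the standard pthread_attr_setscope() interface.

```c
/* Illustrative only: requesting "process" contention scope, in which the
 * threads library multiplexes the new thread over the pool of LWPs
 * (cf. threads 5a-5f onto LWPs 12a-12d), versus "system" scope, in which
 * the thread is bound to its own kernel scheduling entity. */
#include <pthread.h>

static void *work(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);

    /* Not every implementation supports process scope; fall back if refused. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    if (pthread_create(&tid, &attr, work, NULL) == 0)
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}
```
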
User level memory 2a further includes the mutual exclusion lock (i.e., mutex) 9a associated with data element 9. Thread 5f of user level memory 2a is shown connected by line 9a′ to mutex lock 9a to represent that thread 5f owns mutex lock 9a momentarily and that no other thread can access data element 9 while the owning thread runs. Line 9a′ suggests that the execution sequence of thread 5f is dependent on data element 9, which is protected by mutex lock 9a.

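The ownership relationship drawn by line 9a′ is the familiar mutual exclusion pattern. A minimal pthreads sketch, using illustrative names only (data_element, data_lock) rather than anything taken from the patent, is:

```c
/* Illustrative only: six threads (cf. threads 5a-5f) share one data element
 * protected by one mutex (cf. data element 9 and mutex lock 9a). Whichever
 * thread holds the mutex momentarily excludes every other thread. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t data_lock = PTHREAD_MUTEX_INITIALIZER; /* cf. mutex 9a */
static long data_element = 0;                                 /* cf. data element 9 */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&data_lock);   /* this thread now owns the lock */
        data_element++;                   /* exclusive access to the data  */
        pthread_mutex_unlock(&data_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[6];
    for (int i = 0; i < 6; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 6; i++)
        pthread_join(t[i], NULL);
    printf("data element = %ld\n", data_element); /* expect 600000 */
    return 0;
}
```
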
Unfortunately, the run status of light weight processes is not available within user level memory region 2a. This presents a technical problem which is desirably overcome. Accordingly, when a thread which has been scheduled onto a particular LWP seeks to acquire a particular lock, and thereby access the data associated with that lock, the thread waits for the associated light weight process to complete executing its currently scheduled work, whether that light weight process is already running or shows no indication of running in the future. Priority inversion of threads thus results when scheduled threads spin for excessive periods of time waiting for a prior light weight process to complete execution. Such waiting may block timely scheduling of higher priority threads.

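For contrast, a conventional user level spin lock illustrates the waste described above; the sketch below is illustrative and not drawn from the patent.

```c
/* Illustrative only: a plain user level spin lock. The acquiring thread
 * burns processor time in this loop even if the lock holder's LWP has been
 * descheduled, which is the waste and priority inversion described above. */
#include <stdatomic.h>

typedef struct {
    atomic_flag busy;   /* set while some thread holds the lock */
} naive_spinlock_t;

#define NAIVE_SPINLOCK_INIT { ATOMIC_FLAG_INIT }

void naive_spin_lock(naive_spinlock_t *lk)
{
    /* At user level (in the prior art) there is no way to ask whether the
     * lock owner is still running, so the only choice is to keep spinning. */
    while (atomic_flag_test_and_set_explicit(&lk->busy, memory_order_acquire))
        ;  /* busy wait */
}

void naive_spin_unlock(naive_spinlock_t *lk)
{
    atomic_flag_clear_explicit(&lk->busy, memory_order_release);
}
```
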
SUMMARY OF THE INVENTION
According to the present invention, a computer apparatus includes cooperative user level and operating system level memory regions, in which threads in the user level memory region are scheduled onto light weight processes in the operating system level memory region according to light weight process run states. The computer apparatus includes at least a single data processor which runs light weight processes in accordance with the present invention. The threads are scheduled onto the light weight processes by a threads library which receives information from an operating system data structure containing the run states of the light weight processes.

According to one embodiment of the present invention, the operating system level memory region includes a data structure containing light weight process run state conditions which are provided to the user level memory region for use by the threads library in scheduling threads onto light weight processes. If a thread is scheduled onto a non-running light weight process, the thread is blocked, in accordance with the present invention, and goes to sleep. This is particularly advantageous for high priority threads, which consume substantial processing resources in a spin state: if such a high priority thread is put to sleep, processing resources can be applied to a low priority process which holds a lock on certain data. This allows the low priority light weight process to reach completion rather than being preempted, in terms of processing time, by higher priority processes; such preemption impedes completion of processing of low priority processes.

Further, according to the present invention, light weight process states are mapped onto user level memory regions to permit threads to spin when a target light weight process is running, but to sleep (i.e., block) when a target light weight process is not running. A computer program product according to the present invention provides code which stores the run status of at least a single light weight process, and code which makes the run status information of light weight processes accessible from user level memory.

According to the present invention, acquiring threads spin in a busy waiting loop if scheduled onto running light weight processes. However, if a particular lock on thread-required code or data is not expected to become available soon (as indicated by a non-running light weight process), a thread seeking to be scheduled onto a light weight process is instead directed to go to sleep and to wait to be awakened when the lock opens and the sought data becomes available. When the acquiring thread sleeps, it is said to "block." While a thread sleeps, the processor which was running the light weight process associated with the thread is able to accomplish other tasks until the lock becomes available. The lock is adaptive because the lock's scheduling state and the run status of the owning thread determine whether a new thread will spin or sleep (block). In particular, according to the present invention, if the thread owning a particular lock and its associated data element is currently running, other threads trying to acquire the lock will keep trying to acquire it, since it is assumed the lock will become available soon. On the other hand, if the lock-owning thread is not running, other threads trying to acquire the lock will go to sleep, since in that non-running condition the lock is unlikely to become available soon.

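Expressed as code, the adaptive choice between spinning and blocking might look like the following sketch. This is not the patent's implementation: lwp_is_running(), the field names, and the use of a condition variable for blocking are all assumptions made for illustration, with the run-state query standing in for the operating system data structure that the invention maps into user level memory.

```c
/* A sketch of the adaptive behavior, not the patent's implementation.
 * lwp_is_running() is a hypothetical query standing in for the LWP run-state
 * data structure that the invention maps into user level memory. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int      held;      /* 0 = free, 1 = held                   */
    atomic_int      owner_lwp; /* LWP currently running the lock owner */
    pthread_mutex_t m;         /* guards the sleep/wake handshake      */
    pthread_cond_t  cv;        /* blocked acquirers sleep here         */
} adaptive_lock_t;

/* Hypothetical: reads the run status of an LWP from user level memory. */
extern bool lwp_is_running(int lwp_id);

void adaptive_lock_init(adaptive_lock_t *lk)
{
    atomic_init(&lk->held, 0);
    atomic_init(&lk->owner_lwp, -1);
    pthread_mutex_init(&lk->m, NULL);
    pthread_cond_init(&lk->cv, NULL);
}

void adaptive_lock_acquire(adaptive_lock_t *lk, int my_lwp)
{
    for (;;) {
        int expected = 0;
        if (atomic_compare_exchange_strong(&lk->held, &expected, 1)) {
            atomic_store(&lk->owner_lwp, my_lwp); /* we now own the lock */
            return;
        }
        if (lwp_is_running(atomic_load(&lk->owner_lwp))) {
            continue;  /* owner is on a processor: spin, it should finish soon */
        }
        /* Owner's LWP is not running: block (sleep) instead of spinning. */
        pthread_mutex_lock(&lk->m);
        while (atomic_load(&lk->held) != 0)
            pthread_cond_wait(&lk->cv, &lk->m);
        pthread_mutex_unlock(&lk->m);
    }
}

void adaptive_lock_release(adaptive_lock_t *lk)
{
    atomic_store(&lk->held, 0);
    pthread_mutex_lock(&lk->m);           /* wake any sleeping acquirers */
    pthread_cond_broadcast(&lk->cv);
    pthread_mutex_unlock(&lk->m);
}
```

The essential point is the branch after the failed acquisition attempt: spin while the lock owner can make forward progress, sleep when it cannot, so that a descheduled low priority owner can eventually run and release the lock.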


REFERENCES:
patent: 5452452 (1995-09-01), Gaetner et al.
patent: 5515538 (1996-05-01), Kleiman
patent: 5524247 (1996-06-01), Mizuno
patent: 5542088 (1996-07-01), Jennings, Jr. et al.
patent: 5590326 (1996-12-01), Manabe
patent: 5815689 (1998-09-01), Shaw et al.
patent: 5822588 (1998-10-01), Sterling et al.
Powell, Kleiman, B
