Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing
Reexamination Certificate
1996-01-04
2003-07-22
Courtenay, III, St. John (Department: 2126)
active
06598068
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates to computer systems in which a single program may have multiple simultaneously-active threads, and more specifically, to a method and apparatus for managing access to a shared resource among multiple threads of the same program.
BACKGROUND OF THE INVENTION
During execution, processes use resources, such as memory, modems and printers. To take full advantage of resources, operating systems have been developed which allow multiple processes to share resources. Thus, instead of having to purchase one printer for every word processing program, users may purchase a single printer that may be shared by multiple concurrently-executing word processing programs.
Even though resources may be shared, as described above, many resources may not be used by more than one entity at any given time. Resources that cannot be reliably used by more than one entity at a time are referred to hereafter as single-access resources. For example, most printers are single-access resources because they are unable to print documents for more than one word processor at a time. Consequently, many operating systems include mechanisms that prevent two or more processes from simultaneously accessing single-access resources.
To more fully utilize the processing capacity of a computer system, some computer programs are designed to create “worker threads” to perform specific tasks. Worker threads are sequences of instructions that belong to the same program as the main program, but which may be scheduled and executed independently of the main program. In multi-threaded systems, the main program itself is considered a thread (the “main thread”).
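As an illustration only (assuming a Java-style threading API; the class name and output text below are hypothetical and not taken from the patent), a main thread might spawn a worker thread like this:

public class WorkerExample {
    public static void main(String[] args) {
        // The worker thread is scheduled and runs independently of the
        // main thread's own flow of control.
        Thread worker = new Thread(() -> {
            System.out.println("worker: performing a specific task");
        });
        worker.start();

        System.out.println("main: continuing while the worker runs");
    }
}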
The main thread of an event-driven multi-threaded program is typically responsible for generating visual feedback to let the user know that the user's interaction with user interface controls was detected by the system. For example, when a user selects a button on a control panel, it is typically the responsibility of the main thread of the program associated with the control panel to update the display to indicate that the button was pushed.
Upon initialization, the main thread of an event-driven multi-threaded program typically causes the windowing system to display user interface objects and then informs the operating system about the events in which it is interested (the “events-of-interest”). Such events typically include user interaction with the user interface objects.
After informing the operating system about the events-of-interest, the main thread becomes suspended by the operating system. Typically, the main thread remains suspended until one of the events-of-interest occurs. Upon the occurrence of an event-of-interest, the operating system places a message describing the event in a message queue associated with the multi-threaded program and activates the main thread. The main thread updates the display to provide visual feedback to the user. In addition, the main thread may itself perform some task associated with the selection of the button, or create a worker thread to perform the task.
Many threads of the same program may be active at the same time. If a computer system includes more than one processor, the various threads may even be executed concurrently. For example, when the main thread of a multi-threaded program receives a message, it may create a new worker thread to process the message. When a second message arrives, the main thread may create a second worker thread to process the second message. The main thread, the first worker thread and the second worker thread may all be simultaneously active.
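The following sketch is a hypothetical Java-style rendering of that flow; the messageQueue field stands in for the operating system's message queue, and all names are illustrative rather than part of the patent or of any particular windowing API:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EventLoopSketch {
    // Stands in for the queue the operating system fills with messages
    // describing events-of-interest.
    static final BlockingQueue<String> messageQueue = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // In a real program the main thread would first display its user
        // interface objects and register its events-of-interest.
        while (true) {
            // The main thread suspends here until an event-of-interest occurs.
            String message = messageQueue.take();

            // Provide visual feedback for the event (e.g. show a pressed button).
            System.out.println("main: updating display for " + message);

            // Hand the actual task off to a new worker thread so the main
            // thread can return to waiting for the next message.
            new Thread(() -> System.out.println("worker: processing " + message)).start();
        }
    }
}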
Each of the threads has its own resource requirements, determined by the task that the thread is to perform. For example, both the first and second threads described above may generate visual output to be displayed to a user. Therefore, both threads would require a connection to the window system.
In some environments, a multi-threaded program may be allocated only one connection to the window system. Therefore, the various threads of a program that desire to display information must share the connection. However, connections to window systems are typically single-access resources.
For example, assume that a first worker thread is in the process of sending display commands down the connection when a thread context switch occurs. After the thread context switch, the second worker thread starts sending display commands down the connection. When this occurs, the window system will only see garbage. The window system may respond to the garbage by killing the multi-threaded program or by crashing.
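The hazard can be sketched with a hypothetical Java example (the shared PrintStream stands in for the program's single window-system connection; none of these names come from the patent): each "command" is sent as several separate writes, so a thread context switch between writes lets the two threads' output interleave into something the receiver cannot parse.

import java.io.PrintStream;

public class InterleavingSketch {
    // Stands in for the program's single connection to the window system.
    static final PrintStream connection = System.out;

    static void sendCommand(String name) {
        // A multi-part command; a context switch between these writes lets
        // another thread's output interleave with this one.
        connection.print("BEGIN " + name + " ");
        Thread.yield();
        connection.print("args=1,2,3 ");
        Thread.yield();
        connection.println("END");
    }

    public static void main(String[] args) {
        new Thread(() -> sendCommand("drawLine")).start();
        new Thread(() -> sendCommand("drawRect")).start();
        // The combined output may mix parts of the two commands, which a real
        // window system would treat as garbage.
    }
}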
Therefore, just as multiple programs must be prevented from simultaneously accessing a shared single-access resource, so must threads of the same program. Unfortunately, most operating systems do not prevent concurrent access to resources by threads within a single program. Therefore, the responsibility of providing a mechanism for controlling the access to resources between threads typically falls on the application developer.
The standard solution to the problem of managing resource access between threads of a program is to provide a “lock” for each shared single-access resource. A typical lock is a data structure that has an owner and a state, and is associated with a shared resource. To guarantee that a single-access resource is accessed by one thread at a time, the application developer must design each thread to obtain control of the lock associated with the resource before the thread accesses the resource. If the lock is held by another thread, then the requesting thread must wait until the lock is released. If not all threads respect this locking policy, then the single-access resource is at risk of being corrupted, the program may crash, or both.
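A minimal sketch of such a lock, assuming Java's built-in monitor primitives (the ResourceLock name and its fields are illustrative only and are not the patent's data structure):

public class ResourceLock {
    // The thread that currently holds the lock, or null when the lock is free.
    private Thread owner;

    // Block the calling thread until it can take ownership of the lock.
    public synchronized void acquire() throws InterruptedException {
        while (owner != null) {
            wait();                      // suspend until the holder releases
        }
        owner = Thread.currentThread();
    }

    // Release the lock and wake one thread waiting to acquire it.
    public synchronized void release() {
        owner = null;
        notify();
    }
}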
To provide such a locking mechanism, the application developer typically must write custom code for each of the threads of the program so that the threads will set and release the lock as needed. This is particularly difficult and tedious because the main thread (which in event-driven programs includes an event loop) is often part of a base library provided to application developers by third parties. Further, in some software development environments, the event loop is hidden from the application developer, so the application developers are unable to rewrite it to add lock-related functionality.
In conventional locking mechanisms, all threads (including the main thread) must request a lock when they need to use the resource associated with the lock, and release the lock when done using the resource associated with the lock. Thus, when the main thread has finished processing all of the messages in its message queue, the main thread releases the lock associated with the window system connection and then becomes suspended until more messages are available. While the main thread remains suspended, any active worker threads may gain control of the lock and access the corresponding resource. The operating system maintains a queue of all the threads that are waiting for a lock. When a lock is released by a thread, the operating system picks one of the threads in the queue associated with the lock and grants the lock to the selected thread.
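Under that conventional scheme, the main thread's loop interleaves lock management with message processing, roughly as in the following sketch. It reuses the hypothetical ResourceLock from the previous sketch, and waitForMoreMessages is only a placeholder for the operating-system call that suspends the main thread until another event-of-interest arrives:

import java.util.LinkedList;
import java.util.Queue;

public class ConventionalMainLoop {
    // Placeholder for the call that suspends the main thread until more
    // messages are available.
    static void waitForMoreMessages() throws InterruptedException {
        Thread.sleep(100);
    }

    public static void main(String[] args) throws InterruptedException {
        ResourceLock windowLock = new ResourceLock();  // from the earlier sketch
        Queue<String> queue = new LinkedList<>();

        while (true) {
            windowLock.acquire();          // claim the window-system connection
            String message;
            while ((message = queue.poll()) != null) {
                System.out.println("main: drawing feedback for " + message);
            }
            windowLock.release();          // waiting worker threads may now acquire it

            waitForMoreMessages();         // main thread suspends here
        }
    }
}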
FIG. 1 illustrates the timing of a particular set of operations in a computer system with a lock mechanism that is designed according to the approach described above. Referring to FIG. 1, at time T0 three threads are active: a main thread, a worker thread 1 and a worker thread 2. The main thread of the program initially holds the lock on the resource in question. At time T1, the main thread has finished using the resource associated with the lock, and therefore releases the lock. Also at time T1, the worker thread 2 needs to use the resource, and therefore requests the lock.
At time T2, the worker thread 2 acquires the lock and the worker thread 1 requests the lock. At times T3 and T4, the worker thread 2 performs work that requires use of the resource associated with the lock and re
Beyer Weaver & Thomas LLP
Courtenay III St. John
Sun Microsystems Inc.