Classification: Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing
Type: Reexamination Certificate
Filed: 1998-06-05
Granted: 2001-03-06
Examiner: Oberley, Alvin E. (Department: 2755)
U.S. Class: C709S241000
Status: active
Application number: 06199094
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to computer performance, and deals more particularly with a technique, system, and computer program for protecting shared resources in a novel, efficient manner using mutual exclusion semaphores, whereby the number of semaphores required is greatly reduced.
2. Description of the Related Art
Many modern computer programs are written to take advantage of multiprogramming, whereby more than one program appears to execute concurrently. The appearance of concurrency is achieved by allowing each of the multiple programs to execute for some limited period of time, followed by execution of a different one of the multiple programs. This execution for a limited time is repeated over and over, allocating use of the central processing unit (CPU) to the multiple programs by what is known in the art as “time slicing”. Multiple threads may execute within a single process, implementing yet another type of concurrent use of the CPU. A process is an instance of a running program. A thread is a single execution path within such a program. Operating systems provide built-in mechanisms for switching between concurrent programs and/or threads in a very quick and efficient manner. Because the switching occurs so quickly, it appears that the concurrent programs and/or threads are executing in parallel (although they are actually executing in serial). Because the concepts of the present invention apply equally to concurrent programs and concurrent threads, those terms are used interchangeably herein.
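The thread model described above can be illustrated with a short sketch using Python's `threading` module (the thread names and worker function are illustrative, not taken from the patent). Each thread is a separate execution path inside one process, and the operating system switches among them transparently:

```python
import threading

results = []

def worker(name):
    # Each thread is a separate execution path within the same process,
    # so every thread sees (and appends to) the same 'results' list.
    results.append(name)

threads = [threading.Thread(target=worker, args=(f"thread-{i}",))
           for i in range(4)]
for t in threads:
    t.start()   # the OS now time-slices among the runnable threads
for t in threads:
    t.join()    # wait for each thread to finish

print(sorted(results))
```

Although the threads appear to run in parallel, on a single CPU they are interleaved serially by the scheduler, exactly as the time-slicing description above suggests.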
As various applications execute, they invariably need to access the resources of the system. Shared use of the CPU resource has been briefly discussed above, with reference to time slicing. Other sharable resources of the system that the concurrent programs may need access to include memory and disk files. It is necessary for these sharable resources to be shared in such a manner that the integrity of the data contained therein is protected. (Note that the phrases “shared resource” and “sharable resource” are used interchangeably herein.)
For purposes of illustrating the concerns for concurrent access to sharable resources, suppose the resource of interest is payroll information, and two of the concurrently-executing programs are payroll programs. The first program may read the stored value of the year-to-date salary for each employee, add the current pay period salary to that figure, and rewrite the updated value. The second program may be a program for printing income tax information. This second program must be properly synchronized with the first, so that all values are printed from the updated information, or from the non-updated information, depending on the details of the programs and the timing in which they are executed: if information for some employees is printed using updated information, and information for other employees is printed using non-updated information, the tax information will be inconsistent and inaccurate.
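The hazard in the payroll example is the classic "lost update". A deterministic sketch (the dollar amounts are hypothetical, and the interleaving is simulated rather than produced by a real scheduler) shows how two unsynchronized read-modify-write sequences can clobber each other:

```python
# Simulate two logical threads performing an unsynchronized
# read-modify-write on a shared year-to-date salary figure.
ytd_salary = 50000

# Thread A reads the current value, intending to add this period's pay.
a_read = ytd_salary
# Thread B reads the same, still-stale value before A writes back.
b_read = ytd_salary

# Thread A writes back its update (+2000).
ytd_salary = a_read + 2000
# Thread B writes back its update (+1500), overwriting A's result.
ytd_salary = b_read + 1500

# A's 2000 update has been silently discarded: the total should be
# 53500 but is only 51500.
print(ytd_salary)
```

Any program that reads the value between the two writes, such as the tax-printing program above, would likewise observe an inconsistent figure.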
While it is unlikely that two programs of the specific nature used in this example would be executed concurrently, the example provides an illustration of the problems that may occur in the general situation where one program reads and writes data, and any other program also needs to access that data.
A typical solution to allowing multiple programs or threads to access sharable resources, while protecting the integrity of those resources, is to use a mutual-exclusion semaphore (also referred to in the art as a binary semaphore, or mutex semaphore, where “mutex” is an abbreviation for “mutual exclusion”). A semaphore is a protected variable provided by the operating system. The semaphore serializes access to the shared resource, so that use of the resource by multiple programs is mutually exclusive: if one program is using the resource, then any other program that wants access to that resource must wait. This mutual exclusion by serialized access will be referred to herein as “protecting” or “locking” the shared resource. When exclusive access to the shared resource is no longer required, the program holding the semaphore releases the semaphore, which “unlocks” the resource. (“Holding” a semaphore signifies that a user is using the semaphore to protect a shared resource.) A binary semaphore is implemented using a variable that takes on only the values zero and one. When the value is zero, this indicates that no user is holding the semaphore, and the sharable resource is not currently being accessed. When a program wishes to access that resource, the semaphore is incremented to the value one. Other programs interested in accessing the resource will detect that it is being used, and is not available, by seeing that the semaphore value has been set to one. The concepts of semaphores, and how they are implemented in order to provide mutual exclusion, are well known in the art. Reference may be made to “An Introduction to Operating Systems”, by H. M. Deitel and published by Addison-Wesley, pp. 89-95 (1983), for an explanation of semaphores.
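A minimal sketch of this lock/unlock discipline, using Python's `threading.Lock` as a stand-in for the operating-system mutex semaphore described above (the salary figures and thread counts are illustrative):

```python
import threading

ytd_salary = 0
salary_lock = threading.Lock()  # the mutual-exclusion (binary) semaphore

def add_pay(amount, periods):
    global ytd_salary
    for _ in range(periods):
        # Acquiring the lock "locks" the shared resource; any other
        # thread that tries to acquire it must wait. Leaving the
        # 'with' block releases the lock, "unlocking" the resource.
        with salary_lock:
            current = ytd_salary            # read
            ytd_salary = current + amount   # modify and write back

threads = [threading.Thread(target=add_pay, args=(100, 1000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(ytd_salary)  # 400000: serialized access, so no updates are lost
```

Because every read-modify-write sequence holds the semaphore, the 4 × 1000 increments of 100 always sum to exactly 400000, whatever interleaving the scheduler chooses.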
In addition to the need to serialize access to sharable resources in general, the information represented therein may need to be protected on a finer level of granularity. For example, if the information to be protected occupies a large amount of memory, limiting access to all the applicable memory locations as a single unit will likely result in operating inefficiencies. For example, suppose the memory locations represent one or more large records from a database. When the thread using those records is swapped out (that is, its time slice ends, and it therefore stops executing temporarily) and a different thread begins executing, that subsequent thread may also require access to the records in memory. If they are locked by the thread that has swapped out, then this current thread cannot access them and therefore cannot do productive work, and will waste its time slice. By protecting the information at more granular levels, for example by locking only the specific record being accessed, the likelihood of the second thread being able to gain access to the data it needs is greatly increased.
Commonly, granular access to sharable resources is achieved by grouping the resources into logical sets, whereby one semaphore protects access to all members of the set. The set members will be referred to herein as “objects”, although this is not meant to imply that applicability is limited to object-oriented programming systems. An object may be any type of data representation, such as a single record or field. Alternatively, it may be a more complicated data structure such as a tree or table.
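The set-based grouping can be sketched as a fixed pool of locks with objects mapped onto it by hashing, which is the essence of the lock-striping idea named in the title. The stripe count, class name, and hash-based mapping below are illustrative assumptions for exposition, not details taken from the patent:

```python
import threading

class StripedLocks:
    """A fixed pool of mutexes; each object key hashes to one stripe."""

    def __init__(self, stripes=8):
        self._locks = [threading.Lock() for _ in range(stripes)]

    def lock_for(self, key):
        # All objects that hash to the same stripe form one logical set
        # protected by one semaphore; objects in other stripes are
        # protected by different semaphores and can be used in parallel.
        return self._locks[hash(key) % len(self._locks)]

striping = StripedLocks(stripes=8)

record_key = "employee:1234"
with striping.lock_for(record_key):
    # Critical section: only threads contending for this stripe wait;
    # threads touching other stripes proceed concurrently.
    pass
```

Note the trade-off this sketch embodies: the number of semaphores is fixed at the stripe count regardless of how many objects exist, yet unrelated objects usually map to different locks, so most of the parallelism of per-object locking is retained.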
Two techniques for protecting the objects in a set are known in the art. The first technique is to use one mutual-exclusion semaphore to protect the entire set, and the second is to use one semaphore for each object in the set. The limitations of each of these approaches will now be discussed.
The first technique (using one semaphore for the entire set of objects) is simplest to implement. It is also the more general solution, because it allows set membership to easily change during operation. (For example, if an entire table is being protected, any new rows added to that table as a thread executes will automatically be protected.) However, there is a significant serialization penalty inherent in this technique, in that different threads may need access to different set members. These multiple threads cannot access the set members in parallel because access to the entire set is limited to the single thread which is holding the semaphore. As each thread begins its time slice, it will find the resource it needs unavailable, as discussed earlier, and will thus give up its turn to access the CPU. Therefore, more granular access to the objects is needed than is provided by this first technique.
The second technique (using one semaphore for each set member) maximizes parallelism in the system: since each set member has its own semaphore, theoretically one thread could be accessing e
Bruce, Clay A.
Doubet, Marcia L.
International Business Machines Corp.
Nguyen, Van
Oberley, Alvin E.
Title: Protecting shared resources using mutex striping