Method for optimizing locks in computer programs

Data processing: software development, installation, and management – Software program development tool – Translation of code

Reexamination Certificate


Details

US Classifications: C717S127000, C717S128000, C717S129000, C717S130000, C717S131000, C717S152000, C717S153000, C717S157000, C717S159000, C712S227000

Status: active

Patent number: 06530079

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to computer programming. In particular, it provides a method, and a variant of that method, by which a compiler (either static or dynamic), a programming development environment or tool, or a programmer can transform a program or part of a program so as to reduce the overhead of lock operations, either by removing the lock operations or by replacing them with simpler operations, while strictly preserving the exact semantics that the program or program parts had before the transformation.
2. Background Description
Multithreaded programming languages, for example Java™ [5], and programs built from sequential languages but executing in a multithreaded environment, for example Posix [2], often use locking operations. These locking operations are used to enforce the constraint that only one thread of execution may have access to some resource (data, hardware, code, etc.) at a time. A thread is a locus of control in a computing environment. In an object-oriented language such as Java™, the lock is typically associated with an object and is used to ensure mutual exclusion in accessing that object. In those cases, the lock is regarded as a part of that object. (Java is a trademark of Sun Microsystems, Inc.)
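In Java, this association is visible directly in the language: a synchronized block or method acquires the lock (monitor) of a particular object and releases it on exit, whether the block is left normally or via an exception. The minimal sketch below, using a hypothetical Counter class not taken from the patent, illustrates the idea.

```java
// Minimal Java sketch (hypothetical example, not from the patent):
// the lock used for mutual exclusion is the monitor of the object
// being protected, here the Counter instance itself.
public class Counter {
    private int value = 0;

    public void increment() {
        synchronized (this) {   // acquire the lock associated with 'this'
            value++;
        }                       // lock released on exit, even via an exception
    }

    public synchronized int get() {  // equivalent form: locks 'this'
        return value;
    }
}
```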
Each lock has associated with it some storage that is used to implement the lock. This storage provides a flag to indicate whether the lock has already been acquired by another thread. The lock can also provide a queue. The queue provides a place where a thread that attempts to acquire a lock already held by another thread can wait for the lock to become free. A thread that is on a queue is quiescent, i.e., it is not actively executing. If a lock provides a queue, it must provide some mechanism for threads on the queue to be notified that they can exit the quiescent state and again attempt to acquire the lock. This mechanism is referred to as the notify operation. Locks may also have side effects associated with them. For example, in the Java™ programming language, or in various run-time and hardware systems that implement a release consistency programming model [1], a locking operation must update the globally accessible copy of a variable if required by the semantics of the programming language or release consistency model being implemented. Other side effects could include updating tables indicating which locks are held by the program, or providing a point for a breakpoint operation in a debugger. Locking operations, and the locks needed to support them, have a cost both in increased execution time of the program and in the amount of computer storage necessary for the program to execute.
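As a rough illustration of the storage just described, the hedged sketch below models a lock with an explicit held flag, a queue of quiescent threads, and a notify operation performed on release. The class name SimpleLock and its details are assumptions for illustration only; real JVM locks are considerably more elaborate.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative model only: a lock with a "held" flag, a queue of
// quiescent (non-executing) threads, and a notify operation.
public class SimpleLock {
    private boolean held = false;                       // acquired flag
    private final Queue<Thread> waiters = new ArrayDeque<>();

    public synchronized void acquire() throws InterruptedException {
        while (held) {
            waiters.add(Thread.currentThread());        // join the queue
            try {
                wait();                                 // become quiescent
            } finally {
                waiters.remove(Thread.currentThread()); // woken up: retry
            }
        }
        held = true;
    }

    public synchronized void release() {
        held = false;
        notifyAll();  // the notify operation: queued threads may retry acquire
    }
}
```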
Many compilers use a representation called a “call graph” to analyze an entire program. A call graph has nodes representing procedures and edges representing procedure calls. The term “procedure” is used here to refer to subroutines, functions, and also “methods” in object-oriented languages. A direct procedure call, where the callee (called procedure) is known at the call site, is represented by a single edge in the call graph from the caller to the callee. A procedure call where the callee is not known, such as a “virtual method” call in an object-oriented language or an indirect call through a pointer, is represented by edges from the caller to each possible callee. It is also possible that, for a particular (callee) procedure, not all of its callers are known. In that case, the call graph conservatively puts edges from all possible callers to that callee.
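A bare-bones version of such a call graph might look like the sketch below (hypothetical class and method names; the actual analysis infrastructure is not specified in this passage). Direct calls contribute one edge, while virtual or indirect calls conservatively contribute an edge to every possible callee.

```java
import java.util.*;

// Hypothetical sketch of a call-graph representation:
// one node per procedure, one edge per (possible) caller/callee pair.
public class CallGraph {
    // Adjacency list: each procedure name maps to the procedures it may call.
    private final Map<String, Set<String>> edges = new HashMap<>();

    public void addProcedure(String name) {
        edges.computeIfAbsent(name, k -> new HashSet<>());
    }

    // Direct call: exactly one edge from the caller to the known callee.
    public void addDirectCall(String caller, String callee) {
        addProcedure(caller);
        addProcedure(callee);
        edges.get(caller).add(callee);
    }

    // Virtual or indirect call: conservatively add an edge to every
    // possible callee, since the actual target is unknown at the call site.
    public void addIndirectCall(String caller, Collection<String> possibleCallees) {
        for (String callee : possibleCallees) {
            addDirectCall(caller, callee);
        }
    }

    public Set<String> callees(String caller) {
        return edges.getOrDefault(caller, Collections.emptySet());
    }
}
```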
Within a procedure, many compilers use a representation called the “control flow graph” (CFG). Each node in a CFG represents a “basic block” and the edges represent the flow of control among the basic blocks. A basic block is a straight-line sequence of code that has a single entry (at the beginning) and a single exit (at the end). A statement with a procedure call does not disrupt a straight-line sequence of code. In the context of languages that support “exceptions”, such as Java™, the definition of a basic block is relaxed to include statements which may throw an exception. In those cases, there is an implicit possible control flow from a statement throwing an exception to the block of code handling the exception. The basic block is not forced to end at each such statement; instead, such a basic block bb is said to have a flag bb.outEdgeInMiddle set to true.
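A minimal sketch of such a CFG node is given below, again with hypothetical names. The only point it illustrates is the relaxed basic-block definition: a statement that may throw an exception records an implicit outgoing edge via the outEdgeInMiddle flag rather than ending the block.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative CFG node: a basic block with explicit successor edges and
// the relaxed-definition flag for statements that may throw exceptions.
public class BasicBlock {
    final List<String> statements = new ArrayList<>();
    final List<BasicBlock> successors = new ArrayList<>();  // CFG edges
    boolean outEdgeInMiddle = false;  // an interior statement may throw

    void addStatement(String stmt, boolean mayThrow) {
        statements.add(stmt);
        if (mayThrow) {
            // There is implicit control flow from this statement to the
            // exception handler, but the block is not split here.
            outEdgeInMiddle = true;
        }
    }
}
```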
A topological sort order enumeration of nodes in a graph refers to an enumeration in which, if the graph contains an edge from node x to node y, then x appears before y. If a graph has cycles, then such an enumeration is not guaranteed for nodes involved in a cycle. A reverse topological sort order lists nodes in the reverse order of a topological sort.
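For concreteness, one standard way to compute such an enumeration is a depth-first search whose reverse post-order gives a topological sort when the graph is acyclic; reversing that list yields the reverse topological sort order. The sketch below is a generic illustration, not an algorithm claimed by the patent.

```java
import java.util.*;

// Generic sketch: topological-sort enumeration of a directed graph given as
// an adjacency map. For an edge x -> y, x appears before y in the result;
// if the graph has cycles, that property cannot hold for nodes on a cycle.
public class TopoSort {
    public static List<String> topoOrder(Map<String, List<String>> graph) {
        List<String> postOrder = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String node : graph.keySet()) {
            dfs(node, graph, visited, postOrder);
        }
        Collections.reverse(postOrder); // reverse post-order = topological order
        return postOrder;
    }

    private static void dfs(String node, Map<String, List<String>> graph,
                            Set<String> visited, List<String> postOrder) {
        if (!visited.add(node)) return;
        for (String succ : graph.getOrDefault(node, List.of())) {
            dfs(succ, graph, visited, postOrder);
        }
        postOrder.add(node); // appended only after all of its successors
    }
}
```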
Prior art for a similar goal of reducing the synchronization costs of a program by a compiler or programming tool or environment can be found in the papers [3,4,5,6,8,9,10,11,12,13]. These methods do not perform one class of optimizations: removing synchronization from objects protected by mutual exclusion (mutex) locks, based on the scope in which the lock is accessed. Furthermore, these methods do not handle programs with explicit constructs for multithreading and exceptions (e.g., “try-catch” constructs in Java™).
Prior art for reducing locking or synchronization operations by a compiler, programming tool or environment can be found in the papers [5,6,8,9,10,11,12,13]. The techniques described in these papers consider advance/wait, post/wait/clear, and full/empty, extended full/empty, or counter-based synchronization. All of these synchronization methods enforce ordering (i.e., producer/consumer) synchronization, and the goal of these techniques is to transform programs so as to reduce the amount of ordering synchronization. Ordering synchronization is typified by post/wait/clear synchronization. A post operation involves acquiring a lock on a key K, setting K to a known value (usually 1), and releasing the lock on K. A wait operation on key K involves repeatedly examining the value of K until it reaches the known value. A clear operation first acquires the lock on K and then sets the value of K to another known value (usually 0). The clear operation is used to initialize K. Thus, by using clear/post/wait, an order can be enforced among the statements that precede the post and those that follow the wait. In particular, all statements before the post can be made to execute before any statement after the wait. All of the techniques described above use this ordering information. In particular, they determine which orders enforced by some ordering synchronization operations are also enforced by other ordering synchronization operations, and eliminate the former synchronization operations. In some cases, a reduced number of new operations is introduced to eliminate all of the old operations [9], and in other cases the old state of keys is known after wait operations, reducing the number of initializing clear operations [6].
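In Java-like form, post/wait/clear on a key K might be modeled as in the hedged sketch below (an illustration only; the cited techniques operate on various language- and library-level forms of this pattern, and the class and method names here are invented). Statements executed before post() on a key are thereby ordered before statements executed after the corresponding wait on that key.

```java
// Hypothetical model of post/wait/clear ordering synchronization,
// using a shared key K protected by its own lock (the object's monitor).
public class OrderingKey {
    private int value = 0;          // the key K; 0 = cleared, 1 = posted

    // clear: acquire the lock on K and reset it to the initial value.
    public synchronized void clear() {
        value = 0;
    }

    // post: acquire the lock on K, set it to the known value, release.
    public synchronized void post() {
        value = 1;
        notifyAll();
    }

    // wait: examine K until it reaches the known value. (Blocking on the
    // monitor replaces a busy-wait loop in this sketch.)
    public synchronized void await() throws InterruptedException {
        while (value != 1) {
            wait();
        }
    }
}
// Usage: statements before producer.post() are made to execute before any
// statement that follows consumer.await() on the same OrderingKey.
```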
Prior art for reducing synchronization for mutex locks can be found in the papers [3,4]. In [3], the number of lock operations is reduced by a coarse-graining transformation, which leads to a single lock ensuring mutual exclusion for a coarser grain region, rather than multiple locks ensuring mutual exclusion for various finer-grain regions. While this reduces the number of lock operations, it leads to the problem of false exclusion, where operations that do not need mutual exclusion are also carried out in mutual exclusion. Therefore, the reduction in the number of lock operations comes at a price of potentially increased contention due to the lock. This transformation can sometimes degrade the performance of the program. In [4], the program is transformed so that multiple lock operations on the same object are replaced by a single lock operation on that object. While eliminating lock operations, this method has to retain at
