Method of using delays to speed processing of inferred...

Electrical computers and digital processing systems: memory – Storage accessing and control – Control technique


Details

US Classification: C711S119000, C711S151000

Type: Reexamination Certificate

Status: active

Patent Number: 06460124

ABSTRACT:

CROSS-REFERENCE TO RELATED APPLICATIONS
BACKGROUND OF THE INVENTION
The present invention relates to computer architectures for multiprocessor systems and in particular to an architecture providing improved cache control when coordinating the exclusive use of data among multiple processors.
Computer architectures employing multiple processors working with a common memory are of particular interest in web servers. In such an application, each processor may serve a different web site whose content and programs are shared from the common memory.
In situations like this, each of the processors may need to modify the shared data. For example, in the implementation of a transaction-based reservation system, multiple processors handling reservations for different customers must read and write common data indicating the number of seats available. If the processors are not coordinated in their use of the common data, serious errors can occur. For example, a first processor may read a variable indicating an available airline seat and then set that variable to indicate that the seat has been reserved by the processor's customer. If a second processor reads the same variable before the first processor sets it, it may, based on that read, erroneously set the variable again, with the result that the seat is double booked.
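
To make the hazard concrete, the following is a minimal sketch of the race in C using POSIX threads; the variable and function names are invented for illustration and do not come from the patent. Both threads can observe the seat as available before either marks it reserved, producing the double booking described above.

/* Hypothetical illustration of the unsynchronized read-then-write race. */
#include <pthread.h>
#include <stdio.h>

static int seat_available = 1;            /* shared reservation state */

static void *reserve(void *customer)
{
    if (seat_available) {                 /* read: seat looks free     */
        /* the other processor may interleave here */
        seat_available = 0;               /* write: mark seat reserved */
        printf("customer %ld booked the seat\n", (long)customer);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, reserve, (void *)1L);
    pthread_create(&b, NULL, reserve, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;   /* on an unlucky interleaving, both customers print */
}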
To avoid these problems, it is common to use synchronizing instructions for portions of code (often called critical sections) in which simultaneous access by more than one processor is prohibited.
Synchronizing instructions may be used in two general ways. First, the synchronization instruction may provide an atomic function, that is, a function that, once begun, cannot be interrupted by any other processor. Such instructions may perform an atomic read/modify/write sequence as could be used in the above example. A variation of this use is a pair of “bookend” synchronization instructions (such as Load Lock/Store Conditional) that provide quasi-atomic execution of the intervening instructions: interruption by other processors cannot be prevented, but it can be detected, so that the instructions may be repeated until no interruption occurs.
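
As a sketch of the bookend idea, the following uses C11 atomics; on load-linked/store-conditional architectures, a compare-and-swap loop of this form is typically compiled to exactly such an instruction pair, retried until no interruption is detected. The seat counter is an invented example, not the patent's code.

/* Quasi-atomic read/modify/write in the spirit of Load Lock/Store
 * Conditional, expressed with C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>

static _Atomic int seats = 10;

bool reserve_one_seat(void)
{
    int observed = atomic_load(&seats);               /* "load lock" step */
    while (observed > 0) {
        /* "store conditional" step: fails if another processor changed
         * seats since our load, in which case observed is refreshed
         * and we repeat the attempt. */
        if (atomic_compare_exchange_weak(&seats, &observed, observed - 1))
            return true;                              /* uninterrupted    */
    }
    return false;                                     /* sold out         */
}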
Second, the synchronizing instruction may be used to take ownership of a lock variable, which must be owned before other shared data may be modified. An atomic synchronization instruction is used to check for the availability of the lock (by checking its value) and, if it is available, to take ownership of the lock variable (by changing its value).
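
A minimal sketch of this second use, with a C11 atomic_flag standing in for the lock variable (names invented for illustration): the atomic test-and-set both checks availability and takes ownership in a single uninterruptible step.

/* Spin lock built on an atomic read/modify/write instruction. */
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire_lock(void)
{
    /* test-and-set atomically returns the old value and sets the
     * flag: if it returns 0, the lock was free and is now ours. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                     /* lock owned elsewhere: spin and retry */
}

void release_lock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}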
In the first use of synchronization instructions, the critical section is short and well defined by the synchronization instructions themselves. In the second case, where a lock variable is acquired, the critical section may be arbitrarily long and is not well defined. “Synchronization instruction” as used herein refers broadly to any memory access instruction that permits mutual exclusion operations, that is, the exclusion of concurrent access to the same memory addresses by other processors during the access operations.
Like single processor systems, multiprocessor systems may employ cache memory. Cache memory is typically redundant local memory of limited size that may be accessed much faster than the larger main memory. A cache controller associated with the cache attempts to prefetch data that will be used by an executing program, thus eliminating the delay required for accessing data in the main memory. The use of cache memory generally recognizes the reality that processor speeds are much faster than memory access speeds.
In multiprocessor systems, sophisticated cache coordination protocols, known in the art, are used to ensure that multiple copies of data from the main memory are properly managed to avoid errors caused by different processors working on their different cache copies of main memory. These protocols may work by means of bus communication between different cache controllers, or by using a single common directory indicating the status of multiple caches and their contents. In these cases, the protocols provide for unique ownership of data when a processor writes to its cache copy through an invalidation of other copies. Alternatively, the protocols may broadcast all processor writes without providing for unique ownership.
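
The invalidation approach can be sketched as a small state machine; this is a deliberate simplification of real coherence protocols (which add further states and transient cases), with invented names, offered only to illustrate how a write gains unique ownership.

/* Simplified invalidation-protocol states for one cache line. */
typedef enum { INVALID, SHARED, EXCLUSIVE } line_state;

/* Local write: obtain unique ownership by invalidating other copies
 * (via bus broadcast or a directory lookup, per the text above). */
line_state on_local_write(line_state s, void (*invalidate_others)(void))
{
    if (s != EXCLUSIVE)
        invalidate_others();      /* no other cache keeps a copy       */
    return EXCLUSIVE;             /* ours is now the only valid copy   */
}

/* Another processor's write observed: our copy becomes stale. */
line_state on_remote_write(line_state s)
{
    (void)s;
    return INVALID;
}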
In addition to the obvious delays resulting from lock contention, synchronization instructions used in multiprocessor systems can create inefficiencies in the movement of data between main memory and the caches of the multiple processors. For example, after execution of the synchronization instructions necessary to acquire a lock variable by a first processor, and the loading of a cache line holding the lock variable into the cache of the first processor, a second processor may attempt to acquire the same lock. The lock variable is then transferred to the cache of the second processor, where it cannot be acquired because the lock is already owned, and then must be transferred back again to the first processor for release of the lock, and then transferred to the second processor again for the lock to be acquired. As is understood in the art, a cache line is the normal smallest unit of data transfer into a cache from another cache or memory.
One of the present inventors has recognized, in a jointly authored prior art paper entitled “Efficient Synchronization Primitives For Large-Scale Cache-Coherent Shared-Memory Multiprocessors,” published April 1989 in the Proceedings of the Third Symposium on Architectural Support for Programming Languages and Operating Systems, pp. 64-75, that many of these problems could be avoided by having the programmer or compiler explicitly identify critical sections. By providing an explicit demarcation of the critical section through special delimiting instructions, a processor holding a lock as part of the execution of a critical section would be empowered to defer requests by other caches for the cache line holding the lock variable until the lock was released. The processors waiting for the lock, including any deferred processor, would effectively form a queue for that lock, providing a more efficient method of sharing access to the common synchronized data.
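
As a hypothetical illustration of such explicit demarcation (the primitive names below are invented here, not taken from the paper), a programmer or compiler would bracket the critical section so the cache controller knows when deferral may begin and end:

/* Invented delimiting primitives; the real behavior would live in
 * the hardware, so these are inert stand-ins. */
static void begin_critical(volatile int *lock) { (void)lock; /* hw hint */ }
static void end_critical(volatile int *lock)   { (void)lock; /* hw hint */ }

static void update_seats(volatile int *lock, int *seats)
{
    begin_critical(lock);   /* controller may defer other caches'
                             * requests for the lock's cache line   */
    if (*seats > 0)
        (*seats)--;         /* critical section body                */
    end_critical(lock);     /* release; deferred caches are served
                             * in turn, forming a queue             */
}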
Unfortunately, such a system requires both a change in architecture and a fundamental rewriting of existing programs and/or compilers in order to indicate the boundaries of the critical sections. While such changes may occur in future generations of programming languages and programs, they do not address the large body of existing programs that might be executed on a multiprocessor system.
BRIEF SUMMARY OF THE INVENTION
The present invention recognizes that, with a high degree of reliability, the location and size of a critical section can be inferred without the need for special delineators. Generally, the beginning of the critical section may be inferred from the occurrence of any of a number of pre-existing synchronization instructions. The end of the critical section, while less easy to determine, may be inferred from a second synchronization instruction forming part of a bookend synchronization instruction pair, or from a write to the same memory location accessed by the first synchronization instruction, which is assumed to be a release of a lock variable.
Specifically then, the present invention provides a method of controlling a cache used in a computer having multiple processors and caches communicating through memory. In a first step of the method, as a program is executed by a first processor, a probable initiation of a critical section in the program is inferred from the standard instructions being executed. In response to this inference, the cache of the first processor is loaded with at least one synchronization variable. Prior to completion of the critical section of the executed program, response to other caches requesting write access to the synchronization variable is delayed.
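
The following is a minimal sketch, in C, of that inference and deferral logic as this summary describes it; the controller structure, the bounded-deferral limit, and all names are assumptions made for illustration, not the patented implementation.

#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool      in_critical;   /* critical section currently inferred?    */
    uintptr_t sync_addr;     /* address touched by the sync instruction */
    unsigned  deferred;      /* count of delayed remote requests        */
} cache_controller;

/* Local processor executed a synchronization instruction: infer the
 * start of a critical section and keep the variable's line cached. */
void on_sync_instruction(cache_controller *c, uintptr_t addr)
{
    c->in_critical = true;
    c->sync_addr   = addr;
    c->deferred    = 0;
}

/* Local store: a write back to the synchronization address (or the
 * second of a bookend pair) is taken as the end of the section. */
void on_local_store(cache_controller *c, uintptr_t addr)
{
    if (c->in_critical && addr == c->sync_addr)
        c->in_critical = false;        /* inferred release of the lock */
}

/* Remote cache asks for write access to a line: delay the response
 * while the inferred critical section is active, with an assumed
 * bound so a wrong inference cannot stall the system. */
bool should_delay_remote_write(cache_controller *c, uintptr_t addr)
{
    enum { MAX_DEFERRALS = 1024 };     /* assumed safety bound */
    return c->in_critical
        && addr == c->sync_addr
        && c->deferred++ < MAX_DEFERRALS;
}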
It is therefore one object of the invention to improve data flow between caches during execution of a critical section of a program, in a way that will work with pre-existing programs.
