Apparatus and method for parallel computation

Patent


Details

IPC class: G06F 9/40

Status: active

Serial no.: 058322728

DESCRIPTION:

BRIEF SUMMARY
This invention relates to parallel computational methods and apparatus, to the way in which a sequential program is compiled for execution by parallel processors, and to the mechanism which co-ordinates the subsequent execution. The invention is applicable to any type of data processing, but it is particularly advantageous where computational speed is of the essence, for example large database searching, airline reservations, weather predictions, oil reservoir modelling, financial analysis and modelling, telecommunications switching networks, and in the human genome mapping project.
Parallel computation gives the theoretical possibility of almost unlimited improvement in execution time, but inevitably there will be some data sharing between the processors during run time, and it is essential to ensure the correct order of operations which interact with the same data variable.
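The ordering problem can be illustrated with a minimal sketch (plain Python, purely illustrative; the statement names s1 and s2 and the shared variable x are assumptions, not taken from the patent): two statements that interact with the same variable produce different final values depending on which executes first, which is exactly the indeterminacy that must be prevented.

```python
# Two statements operating on a shared variable x: if their execution
# order is not fixed, the final value of x is indeterminate.

def s1(state):
    state["x"] += 1      # S1: read-modify-write of the shared variable

def s2(state):
    state["x"] *= 2      # S2: another read-modify-write of the same variable

def execute(schedule):
    """Run the statements in the given order against a fresh shared state."""
    state = {"x": 1}
    for stmt in schedule:
        stmt(state)
    return state["x"]

print(execute([s1, s2]))  # S1 then S2: (1 + 1) * 2 = 4
print(execute([s2, s1]))  # S2 then S1: (1 * 2) + 1 = 3
```

The two schedules disagree, so a parallel executor must enforce one particular order for such statement pairs.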
The fundamental impediment in the transition from a serial processing tradition to parallel processing is the absence of a unified program control mechanism for the latter. The Von Neumann type of computer achieves logical order and unity of the operations on the data through a sequence of execution steps conducted by a single program control unit. By their very nature, however, asynchronous multiple-processor machines are deprived of that ordering mechanism. This gives rise to the principal problem of automatic program parallelisation, towards which more than two decades of intensive research have been directed. Due to the independent and asynchronous nature of operations in a distributed environment, two "data-dependent" statements of a program cannot be parallelised, because doing so may lead to an indeterminate order of operations on the shared data and hence cause erroneous and invalid computations.
There are three types of data dependency in a sequential program: flow-, anti-, and output-dependency. Consider a data variable, stored in memory, subject to accessing and modifying operations dictated by successive code statements S1 and S2. If the variable is accessed (read) by S2 after it has been modified (written to) by S1, the code exhibits flow dependency. If it is accessed by S1 and modified by S2, this causes anti-dependency. If it is modified by both S1 and S2, this causes output dependency. "Data dependency analysis" has therefore been used to ascertain whether there exist blocks of code, within the program, which are not data dependent; only then can those blocks of code be parallelised. Any risk of any type of data dependency means that parallelisation cannot be allowed. But accurate data dependency analysis, to assess that risk, is often difficult or not feasible. Hence, the basic rule of current practice is: "When in doubt, do not parallelise". This means that existing automatic parallelisation schemes can achieve only a very limited degree of parallelisation.
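The three dependency types can be made concrete in a short sketch (illustrative Python; the function and variable names are assumptions, chosen only to mirror the S1/S2 pattern described above):

```python
# Flow dependency: S2 reads x after S1 writes it, so S2 must follow S1.
def flow():
    x = 1        # S1: write x
    y = x + 1    # S2: read x
    return y

# Anti-dependency: S1 reads x before S2 overwrites it, so S1 must precede S2.
def anti():
    x = 1
    y = x + 1    # S1: read x
    x = 5        # S2: write x
    return y, x

# Output dependency: both S1 and S2 write x, so the later write must win.
def output():
    x = 1        # S1: write x
    x = 5        # S2: write x
    return x

print(flow())    # 2
print(anti())    # (2, 5)
print(output())  # 5
```

In each function, reversing S1 and S2 would change the result, which is why a parallelising compiler must preserve their relative order.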
Data dependency is also the main reason that parallelisation of object oriented programs has proved difficult, despite their obvious aptitude for distributed execution. In such programs data is encapsulated and accessed only through "methods", i.e. in the form of procedure calls. This implies that parallelisation of the program will involve "inter-procedural analysis", which invariably forces conservative assumptions.
On the other hand, explicit parallel programming can be tedious, time-consuming and prone to errors. Parallel realisation of an application exhibiting non-trivial, data-dependent parallelism can readily expand into network management and operating system issues. Organisation of deadlock-free communication and synchronisation can get the application programmer entangled in a cross-disciplinary activity which leads away from the original objective of the exercise: to write a workable (and efficient) application program.
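As a minimal illustration of the burden this passage describes (Python's threading module is used as an example here; the patent itself is not tied to any language), even a shared counter obliges the programmer to write explicit locking by hand to keep the result deterministic:

```python
import threading

def parallel_increment(n_threads=2, iters=100_000):
    """Increment a shared counter from several threads under an explicit lock."""
    x = 0
    lock = threading.Lock()

    def worker():
        nonlocal x
        for _ in range(iters):
            with lock:          # explicit synchronisation, written by hand
                x += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

print(parallel_increment())  # 2 threads x 100_000 increments = 200000
```

Omitting the lock makes the read-modify-write of x a race, and the total becomes nondeterministic; multiplying such sites across a real application is the "cross-disciplinary activity" the passage refers to.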
The purpose of the invention is to exploit as far as possible the inherent advantages of parallel computation, unhindered by the fear of possible data-dependency problems, and to implement a conventional sequential program in a parallel computation environment without the need to undertake explicit original programming of the parallel processes.

REFERENCES:
patent: 5088034 (1992-02-01), Ihara et al.
patent: 5410696 (1995-04-01), Seki et al.
patent: 5450554 (1995-09-01), Zaiki
patent: 5452461 (1995-09-01), Umekita et al.
patent: 5634059 (1997-05-01), Zaiki
IEICE Transactions, vol. E 74, No. 10, Oct. 1991, Tokyo, JP, pp. 3105-3114, XP000279295, Hironori Kasahara et al., A Fortran Parallelizing Compilation Scheme for OSCAR Using Dependence Graph Analysis.
Computer Architecture News, vol. 17, No. 3, Jun. 1989, New York, U.S., pp. 416-423, XP000035328, Hong-Men Su et al., On Data Synchronization for Multiprocessors.
The 15th Annual International Symposium on Computer Architecture, 30 May 1988, Honolulu, Hawaii, pp. 393-400, André Seznec et al., Synchronizing Processors Through Memory, etc.
