Apparatuses and methods for programming parallel computers

Electrical computers and digital processing systems: processing – Architecture based instruction processing – Multiprocessor instruction

Reexamination Certificate


Details

Type: Reexamination Certificate

Status: active

Patent number: 06311265

ABSTRACT:

COPYRIGHT NOTICE
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by any one of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
FIELD OF THE INVENTION
The present invention relates to apparatuses and methods for programming parallel computers.
BACKGROUND OF THE INVENTION
Despite the rapid increase in the power of individual computer processors, there are many present and potential applications which could benefit from much greater computing power than can be provided by any individual present or foreseeable processor. The major approach to such greater computing power is to use parallel computers, that is, computers having more than one processor. Many different types of parallel computers have been designed, ranging from Symmetric Multi-Processing systems, in which each of multiple processors and some cache memory share main memory and all of the computer's other resources, to so-called shared-nothing systems, where each processor has its own separate, often relatively large, main memory, often has its own mass storage device, and is connected to other processors only by a computer network. The number of processors in current parallel computers varies from two to tens of thousands.
Parallel computers can provide a huge amount of raw computational power, as measured by all of the instructions per second which their multiple processors can execute. The major problem restricting the use of parallel computing has been the difficulty in designing programming techniques that can efficiently and reliably use the computational power of parallel computers to perform useful tasks.
This problem arises for multiple reasons. First, most useful tasks which one might want to accomplish through a parallel computer require that processes be distributed to the various processors, or nodes, of the computer and that those processes then communicate with each other. This requires that the code for a process be made available to the node on which it is to run, that a command be given to run that process on that node, that the process determine the nodes on which all other processes it is to talk to are running, and then that it establish communication links with those other processes. If a given individual task is to be parallelized, a decision has to be made as to which portion of the data to be processed by that task should be routed to each of the processes that is executing it. In addition, there are many other details that have to be attended to for a task of any reasonable complexity to be programmed to run on multiple processors. Thus, it has traditionally been a very complex task to write programs for parallel computers.
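The manual steps described above can be sketched, in greatly simplified form, in a few lines of Python. This is an illustrative sketch only, not code from the patent: the worker function, the round-robin partitioning, and the result queue are all assumptions made for the example.

```python
# A minimal sketch of the manual steps described above: starting a
# process per "node", routing a partition of the data to each process,
# and collecting results over an explicit communication link (a queue).
from multiprocessing import Process, Queue

def worker(partition, results):
    # Each process handles only the slice of data routed to it;
    # here it computes a partial sum of squares as a stand-in task.
    results.put(sum(x * x for x in partition))

def run_parallel(data, n_workers):
    results = Queue()
    # Decide which portion of the data goes to each process (round robin).
    partitions = [data[i::n_workers] for i in range(n_workers)]
    procs = [Process(target=worker, args=(p, results)) for p in partitions]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # Combine the partial results from all processes.
    return sum(results.get() for _ in procs)

if __name__ == "__main__":
    print(run_parallel(list(range(10)), 3))  # 285
```

Even this toy version shows the bookkeeping the patent describes: choosing a partitioning, launching each process, and wiring up the links over which results flow back.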
Not only is it difficult to write programs for parallel computers, but it can also be extremely difficult to make such programs work properly. This is because the execution of a parallel program, instead of involving only one process, as do most programs, involves many different processes, each of which might run at differing rates and have differing behaviors each time the program is executed. This means that there are all sorts of synchronization problems which can result between processes; it means that execution is much more complex and, thus, more difficult to fully understand; and it means that finding errors in parallel programs, that is, debugging them, can be much more complex.
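A toy illustration (assumed, not taken from the patent) of the kind of synchronization hazard described above: several threads performing an unguarded read-modify-write on shared state can interleave and lose updates, producing different results on different runs. Guarding the update with a lock restores determinism.

```python
# Two or more threads incrementing a shared counter.  Without the lock,
# the read-modify-write of "counter += 1" can interleave between
# threads, so the final value may vary from run to run -- exactly the
# kind of nondeterministic behavior that makes parallel programs hard
# to debug.  With the lock, the result is always deterministic.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:  # remove this guard and the result becomes unpredictable
            counter += 1

threads = [threading.Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock held during each increment
```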
Over the years there have been many attempts to deal with the problem of programming parallel computers. One approach has been to design parallel programming languages having constructs designed to facilitate the description of all the complexities necessary for parallelization. But even with such languages, the complexity of parallel programming remains considerable. Another approach has been to have parallel compilers which take code which could run in a single process and automatically parallelize it to run on multiple processors. While such compilers do a very good job of removing the complexity of parallelization from the programmer, they usually make very inefficient use of a parallel computer. This is because such compilers' parallelization mechanisms are very general, and, thus, they are often ill-suited to provide efficient parallelization for a particular piece of code.
Parallel relational data base management systems (herein “RDBMSs” for short) use another approach to dealing with the complexity of parallel programming. Such systems enable a user to issue a statement in a data base query language, such as Structured Query Language, or SQL. The system then parses this statement and automatically derives from it a corresponding data flow graph which is executed in a parallel manner. The data flow graph is comprised of a sequence of one or more operators, each of which has an associated subroutine, some of which are parallelizable. The graph connects together the operators of the graph with data flow links through which records from the data base are processed. The RDBMSs automatically parallelize the graph, causing a separate instance of individual parallelizable operators in the graph to be run on each of a plurality of nodes. Different partitions of the data base table can be fed through the data links to different instances of the same operator, defining a multi-branched tree. Such RDBMS systems make good use of parallelism, but their capabilities are limited to reading from and writing to parallel data bases in response to statements in an RDBMS language. Generality is thereby restricted.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide apparatuses and methods for more easily programming parallel computers to operate in a relatively efficient manner.


REFERENCES:
patent: 4298954 (1981-11-01), Bigelow et al.
patent: 4823256 (1989-04-01), Bishop et al.
patent: 4937777 (1990-06-01), Flood
patent: 5237691 (1993-08-01), Robinson
patent: 5355492 (1994-10-01), Frankel
patent: 5381548 (1995-01-01), Matsuo
patent: 5450313 (1995-09-01), Gilbert
patent: 5487167 (1996-01-01), Dinallo et al.
patent: 5557797 (1996-09-01), Yano
patent: 5568614 (1996-10-01), Mendelson et al.
patent: 5721928 (1998-02-01), Umehara
patent: 5903703 (1999-05-01), Okada
patent: 5903730 (1999-05-01), Asai et al.
patent: 6021457 (2000-02-01), Archer et al.
Sodan, Communications of the ACM, “Yin and Yang in Computer Science”, vol. 41, No. 4, pp. 103-111, Apr. 1998.
Zaki et al., SPAA, “A Localized Algorithm for Parallel Association Mining”, pp. 321-330, 1997.
Morrison, Flow-Based Programming, “A New Approach to Application Development”, Chapters 1-29, 1994.
Barclay et al., Sigmod Record, “Loading Databases Using Dataflow Parallelism”, vol. 23, No. 4, pp. 72-83, Dec. 1994.
Cummings et al., IEEE, “Checkpoint/Rollback in a Distributed System Using Coarse-Grained Dataflow”, pp. 424-433, 1994.
DeWitt et al., Communications of the ACM, “Parallel Database Systems: The Future of High Performance Database Systems”, vol. 35, No. 6, pp. 85-98, Jun. 1992.
DeWitt et al., Computer Sciences Technical Report #1079, “Parallel Database Systems: The Future of High Performance Database Systems”, pp. 1-27, Feb. 1992.
Gray et al., Morgan Kaufmann Publishers, “Transaction Processing: Concepts and Techniques”, 1993, pp. 211-217.
Lau et al., IEEE, “Propagating Buffer: A New Approach to Deadlock Freedom in Store-and-Forward Networks”, pp. 804-807, 1991.
Lucco et al., ACM, “Parallel Programming With Coordination Structures”, pp. 197-208, 1990.
DeWitt et al., IEEE Transactions on Knowledge and Data Engineering, “The Gamma Database Machine Project”, vol. 2, No. 1, pp. 44-63, Mar. 1990.
DeWitt et al., “The Gamma Database Machine Project”, pp. 609-626.
Li et al., ACM, “Real-Time, Concurrent Checkpoint for Parallel Programs”, pp. 79-88, 1990.
Morison, R., The Third Conference on Hypercube Concurrent Computers and Applications, ACM press, Interactive
