Electrical computers and digital processing systems: memory – Storage accessing and control – Shared memory area
Reexamination Certificate
2003-02-13
2008-11-11
Ellis, Kevin (Department: 2187)
Electrical computers and digital processing systems: memory
Storage accessing and control
Shared memory area
C711S202000, C719S313000
Reexamination Certificate
active
07451278
ABSTRACT:
Mapping of cacheable memory pages from other processes in a parallel job provides a very efficient mechanism for inter-process communication. A trivial address computation can then be used to look up a virtual address that allows the use of cacheable loads and stores to directly access or update the memory of other processes in the job for communication purposes. When an interconnection network permits the cacheable access of one host's memory from another host in the cluster, kernel and library software can map memory from processes on other hosts, in addition to the memory on the same host. This mapping can be done at the start of a parallel job using a system library interface. A function in an application programming interface provides a user-level, fast lookup of a virtual address that references data regions residing on all of the processes in a parallel job running across multiple hosts.
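As a rough illustration of the "trivial address computation" described above, the following C sketch (hypothetical names such as mapped_base and global_ptr; not taken from the patent itself) shows how a per-rank table of mapped base addresses, filled in when the parallel job starts, could turn a (rank, offset) pair into a virtual address usable with ordinary cacheable loads and stores:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical table filled in at job start-up: for each process (rank)
     * in the parallel job, the virtual address at which that process's data
     * region has been mapped into the current process's address space.
     * Remote hosts are assumed reachable through a cache-coherent
     * interconnect, so the mappings behave like ordinary memory. */
    #define MAX_RANKS 1024
    static void *mapped_base[MAX_RANKS];

    /* Trivial address computation: translate a (rank, offset) pair into a
     * virtual address that can be used with plain cacheable loads and
     * stores, with no message passing or system call on the access path. */
    static inline void *global_ptr(int rank, size_t offset)
    {
        return (void *)((uintptr_t)mapped_base[rank] + offset);
    }

    /* Example: directly update a counter that lives in process 3's region. */
    void bump_remote_counter(size_t counter_offset)
    {
        volatile uint64_t *ctr = global_ptr(3, counter_offset);
        *ctr += 1;   /* cacheable store into the other process's memory */
    }

This sketch assumes each rank's data region is contiguously mapped with an identical layout; the library interface described in the abstract would additionally handle regions belonging to processes spread across multiple hosts.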
REFERENCES:
patent: 5845331 (1998-12-01), Carter et al.
patent: 6212617 (2001-04-01), Hardwick
Goodheart, Berny and Cox, James; The Magic Garden Explained: The Internals of UNIX System V Release 4; 1994; Prentice Hall of Australia Pty Ltd; pp. 12, 25, 129-134, 144-147, 252-257, 551-559.
MPI-2: Extensions to the Message-Passing Interface; Sep. 10, 2001; Message Passing Interface Forum; available at http://www-unix.mcs.anl.gov/mpi/mpi-standard/mpi-report-2.0/mpi2-report.htm; Section 6.3.1, Put.
Tanenbaum, Modern Operating Systems, 2001, Prentice-Hall Inc., second edition, pp. 681, 692-693, 700-710, 710-712.
Message Passing Interface (MPI) FAQ, http://www.faqs.org/faqs/mpi-faq/, retrieved from the Internet Apr. 2, 2007, published May 2, 1999, pp. 1-10.
Lee et al., An Efficient, Protected Message Interface, IEEE, Nov. 1998, Computer, vol. 31 No. 11, pp. 69-75.
Tanenbaum, Modern Operating Systems, 2001, Prentice-Hall Inc., second edition, p. 23.
Fillo et al., The M-Machine Multicomputer, Proceedings of the 28th annual international symposium on Microarchitecture, IEEE Computer Society Press, 1995, pp. 146-156.
Tanenbaum, Andrew S. and Van Steen, Maarten, Distributed Systems Principles and Paradigms, 2002, Prentice-Hall Inc., pp. 16-19 and 28-31.
Global Arrays, www.emsl.pnl.gov:2080/docs/global/ printed on Jan. 29, 2003, 5 pp.
J. Nieplocha et al., “The Global Array Programming Model for High Performance Scientific Computing”, SIAM News, Aug./Sep. 1995, pp. 1-9.
J. Nieplocha et al., “Global Arrays: A Non-Uniform-Memory-Access Programming Model for High-Performance Computers”, The Journal of Supercomputing, vol. 10, 1996, 169-189, pp. 1-17.
J. Nieplocha et al., "Global Arrays: A Portable 'Shared-Memory' Programming Model for Distributed Memory Computers", pp. 1-10.
J. Taft, “Achieving 60 GFLOP/s on the production CFD code Overflow-MLP”, PARCO Journal, Article No. 1587, Dec. 15, 2000, pp. 1-16.
M. Snir et al., "MPI: The Complete Reference", 1996.
Feind, Karl
Higgins, Dan
McMahon, Kim
Nelson, Dean
Roe, Dean
Ellis, Kevin
Rutz, Jared I.
Silicon Graphics Inc.
Staas & Halsey , LLP