Interrupt mechanism for shared memory message passing

Electrical computers and digital processing systems: interprogram communication or interprocess communication (IPC) – Interprogram communication using message

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 06799317
U.S. classifications: C719S312000, C710S260000

ABSTRACT:

FIELD OF THE INVENTION
The invention relates to multiprocessor computers, and more particularly to a Message Passing Interface (MPI) application programming interface (API) for passing messages between multiple tasks or processes. Still more particularly, the invention relates to message passing using a shared memory buffer.
TRADEMARKS
S/390 and IBM are registered trademarks of International Business Machines Corporation, Armonk, N.Y., U.S.A., and Lotus is a registered trademark of Lotus Development Corporation, a subsidiary of International Business Machines Corporation, Armonk, N.Y. Other names may be registered trademarks or product names of International Business Machines Corporation or other companies.
BACKGROUND
Message Passing Interface (MPI) defines a standard application programming interface (API) for using several processes at one time to solve a single large problem, called a “job,” on a symmetric multiprocessor and often multi-node computer (commonly one process per node). Message passing is, however, equally applicable to a uniprocessor computer. Each job can include multiple processes, and a process is also commonly referred to as a task. Another software structure analogous to a task is a thread, which can be thought of as a small software component used in multi-tasking, multi-threaded software systems.
Each process, task or thread can compute independently except when it needs to exchange data with another task. When a process, task or thread needs to pass data from one task to another, it is said to pass a “message.” Examples of symmetric multiprocessor computers include, e.g., the IBM RISC System 6000/SP available from IBM Corporation, Armonk, N.Y., supercomputers available from Cray, Silicon Graphics, Hewlett Packard, and Thinking Machines, and computers from such companies as Sun Microsystems, Intel, and the like.
Specifically, a programmer can use an explicit MPI_SEND to identify what data from the memory of a source task is to be sent as a given message. The programmer can also use an explicit MPI_RECV at a destination task to identify where the data is to be placed in the receiver memory.
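For illustration only (this code is not taken from the patent), a minimal C program using the standard MPI API might pair an explicit send at a source task with a matching receive at a destination task. The ranks, message tag, and buffer contents below are arbitrary choices for the sketch.

    /* Illustrative sketch only (not from the patent): task 0 sends a buffer
     * to task 1 using the standard MPI C API.  The ranks, message tag, and
     * buffer size are arbitrary assumptions for this example. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        double data[4] = {1.0, 2.0, 3.0, 4.0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Source task: MPI_Send identifies which memory becomes the message. */
            MPI_Send(data, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Destination task: MPI_Recv identifies where the data is placed. */
            MPI_Recv(data, 4, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("task 1 received %g ... %g\n", data[0], data[3]);
        }

        MPI_Finalize();
        return 0;
    }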
To simplify the description which follows, the sending of messages will be described, although the same processing applies to the receiving of messages. To send a message, data is gathered from memory and fed to a transport layer at the rate the transport layer is able to accept. Bytes of a message are forwarded in chunks, which can be known as packets, and the transport layer can dictate the size of each chunk. When the transport layer is ready to accept N bytes, N bytes are copied from the proper memory locations into a transport buffer, which can be referred to as a “pipe.” The data gather logic delivers a specific number of bytes at each activation and, at the next activation, picks up where it left off to deliver more bytes.
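A rough sketch of that gather loop, under assumed interfaces rather than the patent's, might keep a cursor into the message so each activation resumes where the previous one stopped; pipe_ready_bytes() and pipe_put() are hypothetical transport calls invented for this example.

    /* Illustrative sketch only: gather a message into a transport "pipe" in
     * chunks whose size the transport dictates.  pipe_ready_bytes() and
     * pipe_put() are hypothetical transport calls, not from the patent. */
    #include <stddef.h>

    size_t pipe_ready_bytes(void);                /* bytes the transport will accept now */
    void   pipe_put(const void *chunk, size_t n); /* copy n bytes into the transport buffer */

    struct gather_state {
        const char *msg;   /* message data in task memory */
        size_t      len;   /* total message length */
        size_t      done;  /* bytes already delivered; the cursor resumes here */
    };

    /* Called at each activation: deliver as many bytes as the transport accepts,
     * then remember where to pick up at the next activation. */
    void gather_activate(struct gather_state *g)
    {
        size_t n = pipe_ready_bytes();
        if (n > g->len - g->done)
            n = g->len - g->done;
        if (n == 0)
            return;
        pipe_put(g->msg + g->done, n);
        g->done += n;
    }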
Receiving a message is a mirror image of the sending of one. Some number of bytes becomes available from a pipe and must be distributed. It would be apparent to those skilled in the art that the concepts involved in sending and receiving are so closely related that to understand one is to understand the other.
The MPI standard was designed for distant communication, i.e., for message passing between tasks executing on separate nodes; this type of message passing is referred to as internode message passing. When a sending and a receiving task are located on the same node, message passing can be achieved by intranode message passing, for example by using shared memory buffers for interprocess or intertask communication. Unfortunately, no provision exists for transparently handling communication to a local receiving task from both local sending tasks (intranode) and external sending tasks (internode).
U.S. Pat. No. 5,434,975 to Allen (“Allen”), the contents of which are incorporated herein by reference in their entireties, discloses a conventional interprocess communication (IPC) system. Conventionally, when a plurality of tasks associated with a common processor node in, e.g., a uniprocessor or symmetric multiprocessor computer system sought to communicate with one another, various means of IPC could be used. Allen, for example, describes a conventional shared-memory-only message passing system including a sender/receiver pair with message queues and “signaling” from the sender to the receiver, using a Unix IPC mechanism called a semaphore. The receiver in Allen has only one source of messages, i.e., shared memory. Unfortunately, Allen does not teach or suggest a system that supports message passing where messages originate from two sources, one of which is not local; such a system requires something other than a conventional signal, since an IPC signal can only be generated for a local connection.
U.S. Pat. No. 5,652,885 to Reed et al. (“Reed”), the contents of which are incorporated herein by reference in their entireties, discloses a system that uses a Unix datagram socket as a signaling mechanism, with messages expected to be communicated entirely via shared memory. The receiver waits either in a select call or for a signal. Reed likewise does not address message passing where messages originate from non-local sources.
U.S. Pat. No. 5,835,764 to Platt (“Platt”), the contents of which are incorporated herein by reference in their entireties, discloses a “remote-procedure-call-like” mechanism in which various threads are suspended until their dependent (synchronous) functions are completed. Unfortunately, Platt also does not teach or suggest a system or method that handles a multiplicity of message source types (i.e., local and distant).
U.S. Pat. No. 5,469,549 to Simpson (“Simpson”), the contents of which are incorporated herein by reference in their entireties, discloses a system supporting communication via partitioned shared memory. Unfortunately, Simpson does not teach or suggest any external interfaces.
U.S. Pat. No. 5,313,638 to Ogle (“Ogle”), the contents of which are incorporated herein by reference in their entireties, discloses a system supporting UNIX semaphore synchronization, i.e., message passing into slots controlled by a semaphore. Unfortunately, Ogle does not teach or suggest any support for message passing from external device sources.
It is therefore desirable to provide an improved method that permits a local receiving task to transparently receive communications from both local sending tasks and external sending tasks.
SUMMARY OF THE INVENTION
Briefly, the present invention provides a system, method and computer program product for transparently handling messages originating both from local shared memory and from an external source; conventional approaches supported shared memory or an external source as the only mechanism. In the present invention, a local sender task puts messages into shared memory, while a distant sender task sends messages via a communications link. The receiver task can initially be waiting for a packet-arrival interrupt from the communication link. A hardware interrupt advantageously can call a software service notification function to wake the waiting thread of the receiver task (in one embodiment the waiting thread blocks on a Dijkstra semaphore). The software service notification function can be provided as part of an operating system (OS) by a kernel function or, more commonly, by a device driver that supports the communication link. The present invention can include adding a function to the device driver that allows the local sender to identify and wake up the waiting receiver task thread, thereby simulating a packet-arrival hardware interrupt. When the receiver task thread awakes, it can examine both the shared memory and hardware message queues for work to do.
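The wake-up flow might look roughly like the following sketch, which uses a POSIX semaphore to stand in for the Dijkstra-style semaphore the receiver thread waits on. The helper names (drain_shared_memory_queue, drain_hardware_queue, driver_simulate_packet_arrival) are hypothetical and not the patent's actual interfaces.

    /* Illustrative sketch of the notification scheme described above, not the
     * patented implementation.  A POSIX semaphore stands in for the semaphore
     * the receiver thread waits on; the drain_* helpers and
     * driver_simulate_packet_arrival() are hypothetical names. */
    #include <semaphore.h>

    static sem_t wakeup;                  /* posted on real or simulated packet arrival */

    void drain_shared_memory_queue(void); /* messages from local (intranode) senders */
    void drain_hardware_queue(void);      /* packets from the communication link */

    /* Called from the device driver's packet-arrival interrupt path. */
    void notify_packet_arrival(void)
    {
        sem_post(&wakeup);
    }

    /* Added device-driver entry point: a local sender calls this after placing
     * a message in shared memory, simulating a packet-arrival interrupt. */
    void driver_simulate_packet_arrival(void)
    {
        sem_post(&wakeup);
    }

    /* Receiver task thread: block until woken, then examine both sources. */
    void receiver_loop(void)
    {
        sem_init(&wakeup, 0, 0);          /* simplified: initialized before any sender posts */
        for (;;) {
            sem_wait(&wakeup);
            drain_shared_memory_queue();
            drain_hardware_queue();
        }
    }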
In an example embodiment of the present invention, a method is disclosed for transparently handling message passing from a plurality of local and external source tasks, the method
