Real-time channel-based reflective memory based upon...

Electrical computers and digital processing systems: multicomputer data transferring – Multicomputer data transferring via shared memory – Accessing another computer's memory

Reexamination Certificate

Details

C709S213000, C709S212000, C711S147000, C710S027000, C710S054000

Reexamination Certificate

active

06640245

ABSTRACT:

FIELD OF THE INVENTION
The present invention is related to computer networks for supporting real-time communication. The present invention is more particularly related to computer networks supporting distributed real-time applications, such as distributed industrial control applications.
BACKGROUND OF THE INVENTION
Some computer applications require a computer network which supports distributed processing. Some distributed applications, particularly industrial control applications, also may require support for real-time communication and control over the computer network. Some support for data sharing by multiple distributed processes may also be needed. While conventional commercially-available computer network systems can support distributed processing and data sharing, many of these conventional computer network systems do not have mechanisms which guarantee timeliness of communication of data for real-time applications.
Some computer memory systems are available which support distributed data sharing in a computer network. For example, a distributed shared memory (DSM) provides an illusion of global virtual memory to all computers using the shared memory. It allows for concurrent write operations on different nodes in the network. Such a DSM system, however, requires coherency protocols, consistency models, and other complexities which may not be needed to support distributed real-time processing. Additionally, there is no time constraint guarantee associated with the shared memory.
Another computer memory system that supports data sharing is called reflective memory. Such a memory system reflects, or replicates, data to all nodes of a computer network in a bounded amount of time. These reflective memory systems are based on a ring topology and can support only a limited physical memory size. However, to support distributed real-time processing, data typically does not need to be distributed to all nodes in a system, but only to a few.
Another computer memory system which provides low latency and high performance communication in clustered parallel computing systems is called memory channels. A memory channel is shared by applications which can directly read and write on the memory channel. A memory channel, however, can support only a limited number of nodes and a limited distance between nodes. Moreover, for real-time communication between many nodes, data updates require specific time bounds and frequencies which are not supported by a memory channel architecture.
None of these computer systems provides a flexible and scalable architecture which supports communication with specified time limits, i.e., which efficiently supports real-time communication. Accordingly, a general aim of the present invention is to provide a computer network system which supports distributed real-time processing by providing data reflection with guaranteed timeliness, but also with flexibility and scalability.
SUMMARY OF THE INVENTION
The present invention guarantees timeliness to distributed real-time applications by allowing an application to specify its timeliness requirements and by ensuring that a data source can meet the specified requirements. A reflective memory area initially is established by a data source, an application, or any other system entity. A data source maps to this reflective memory area and writes data into it. In order to receive data from this data source, an application requests attachment to the reflective memory area to which the data source is mapped and specifies timeliness requirements. The application may specify that it needs data either periodically or upon occurrence of some condition. The application allocates buffers at its local node to receive data. The data source then establishes a data push agent thread at its local node and a virtual channel over the computer network between the data push agent thread and the application attached to its reflective memory area. The data push agent thread transmits data to the application over the virtual channel according to the timeliness requirements specified by the application.
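As a rough illustration of this attachment and push-agent flow, the sketch below shows a writer-side reflective memory area, a per-attachment push agent thread, and an attach call carrying the reader's period. It is a minimal sketch under assumptions: the names ReflectiveMemoryArea, DataPushAgent, and attach are illustrative and are not taken from the patent.

```python
# Illustrative sketch, not the patented implementation: a data source owns a
# reflective memory area; each attached reader gets a push-agent thread that
# sends the current contents over that reader's channel at the requested period.
import threading
import time


class ReflectiveMemoryArea:
    """Writer-side memory region whose contents are reflected to readers."""

    def __init__(self, size: int):
        self._data = bytearray(size)
        self._lock = threading.Lock()

    def write(self, payload: bytes) -> None:
        with self._lock:
            self._data[:len(payload)] = payload

    def snapshot(self) -> bytes:
        with self._lock:
            return bytes(self._data)


class DataPushAgent(threading.Thread):
    """Per-attachment thread that pushes snapshots over one virtual channel."""

    def __init__(self, area: ReflectiveMemoryArea, channel_send, period_s: float):
        super().__init__(daemon=True)
        self.area = area
        self.channel_send = channel_send  # e.g. a socket's sendall or a queue's put
        self.period_s = period_s          # timeliness requirement from the reader

    def run(self) -> None:
        while True:
            # Periodic reflection; a conditional attachment would instead
            # test a predicate before sending.
            self.channel_send(self.area.snapshot())
            time.sleep(self.period_s)


def attach(area: ReflectiveMemoryArea, channel_send, period_s: float) -> DataPushAgent:
    """Handle a reader's attachment request by starting a dedicated push agent."""
    agent = DataPushAgent(area, channel_send, period_s)
    agent.start()
    return agent
```

In this sketch a reader that needs a fresh copy every 100 ms would call attach(area, channel_send, period_s=0.1); the data source keeps writing into the area at its own rate, independent of any reader's period.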
The present invention simplifies data sharing and communication by exploiting the typically unidirectional pattern of data sharing and communication. For example, plant data, e.g., in the form of either numerical or video data, typically is sent from a plant controller to an operator station, and control data typically is sent from an operator station to a plant controller. Additionally, a single-writer, multiple-reader model of communication is typically sufficient; that is, not all data needs to be transmitted to all of the nodes in a computer network all of the time. Thus, by using channels, flexibility, switchability, and scalability can be provided between reader and writer groups. Scalability is provided by using channels to control data reflection and to represent the unidirectional access pattern. By using an asynchronous transfer mode (ATM) network, flexibility in channel establishment and cost reduction may be provided.
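The single-writer, multiple-reader pattern described above might be captured by a small registry that maps each writer to the readers attached to it, so data is reflected only to nodes that requested it rather than broadcast everywhere. This is a hypothetical sketch; ChannelRegistry and its methods are assumptions for illustration, not the patent's data structures.

```python
from collections import defaultdict


class ChannelRegistry:
    """Maps each writer to the readers attached to it (one writer per area)."""

    def __init__(self):
        self._readers = defaultdict(list)  # writer id -> reader delivery callbacks

    def attach_reader(self, writer_id: str, deliver) -> None:
        self._readers[writer_id].append(deliver)

    def reflect(self, writer_id: str, message: bytes) -> None:
        # Unidirectional: only the single writer for this id calls reflect();
        # each attached reader receives its own copy over its own channel.
        for deliver in self._readers[writer_id]:
            deliver(message)
```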
Accordingly, one aspect of the invention is a computer network system for supporting real-time communication. A first source of data is connected to the computer network and has a memory containing a reflective memory area into which data is written at a first rate. This first source is responsive to a request to establish a mechanism for periodically transmitting data over a second channel on the computer network at a second rate, slower than the first rate, and with a message size defined by the request. The data rate may be periodic, upon update, or conditional. A second source of data is connected to the computer network and has a memory containing a reflective memory area into which data is written at a third rate. The second source is responsive to a request to establish a mechanism for periodically transmitting data over a first channel on the computer network at a fourth rate, slower than the third rate, and with a message size defined by the request. A first destination of data is connected to the computer network and has a mechanism for allocating a buffer memory for receiving data over a channel on the computer network from one of the first and second sources. The buffer memory has a size determined by the message size and a number of copies of data that may be read from the buffer memory. A second destination of data is also connected to the computer network, and allocates a buffer memory for receiving data over a channel on the computer network from one of the first and second sources. This buffer memory likewise has a size determined by a message size and a number of copies of data that may be read from the buffer memory.
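For the destination-side buffer sized by the message size and the number of readable copies, a minimal sketch might look like the ring of slots below; the ReceiveBuffer name and layout are assumptions for illustration only.

```python
class ReceiveBuffer:
    """Destination-side buffer: message_size bytes times num_copies slots."""

    def __init__(self, message_size: int, num_copies: int):
        self.message_size = message_size
        self.num_copies = num_copies
        self.slots = [bytearray(message_size) for _ in range(num_copies)]
        self.write_index = 0

    def deposit(self, message: bytes) -> None:
        # Called when a message arrives on the channel; overwrites the oldest slot.
        assert len(message) <= self.message_size
        slot = self.slots[self.write_index % self.num_copies]
        slot[:len(message)] = message
        self.write_index += 1

    def latest(self) -> bytes:
        # Most recently completed copy (all zeros until the first deposit).
        return bytes(self.slots[(self.write_index - 1) % self.num_copies])
```

Sizing the buffer as message_size times the number of copies lets the channel keep depositing new messages while the application is still reading older ones.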
Another aspect of the invention is a computer network system having first and second source computers. A first destination computer is connected to the computer network and has a memory. The first destination computer sends a request to one of the first and second sources to establish a data path between that source and the first destination computer, wherein the request includes an indication of a rate and a message size. The data rate may be periodic, upon update, or conditional. A buffer is established in the memory of the first destination computer having a size more than twice the message size indicated by the request. A second destination computer is connected to the computer network and has a memory. The second destination computer sends a request to one of the first and second sources to establish a data path between that source and the second destination computer, wherein the request includes an indication of a rate and a message size. The second destination computer establishes a buffer, in its memory, having a size at least twice the message size of the request. The first and second source computers are connected to the computer network and include a memory. They are responsive to a request from one of the first and second destination computers, indicating a data rate and message size, to establish a mechanism for periodically transmitting data over a channel in the network to the destination computer.
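The request and buffer-sizing rule in this aspect could be sketched as follows: the destination indicates a rate and a message size, and allocates a buffer of at least twice that message size so one copy can be read while the next is being received. AttachRequest and allocate_destination_buffer are hypothetical names introduced only for this sketch.

```python
from dataclasses import dataclass


@dataclass
class AttachRequest:
    """What a destination sends to a source: an indication of rate and message size."""
    writer_id: str
    rate_hz: float      # periodic rate; could instead denote upon-update or conditional
    message_size: int


def allocate_destination_buffer(request: AttachRequest, copies: int = 2) -> bytearray:
    # "At least twice the message size": never allocate fewer than two copies,
    # so one copy can be read while the next is written from the channel.
    copies = max(copies, 2)
    return bytearray(copies * request.message_size)
```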
