Operation-partitioned off-loading of operations in a...

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing

Reexamination Certificate

Details

active

06275867

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to distributed computer processing systems and, more particularly, to management of server operations in distributed processing systems.
2. Description of the Related Art
A distributed computer processing system comprises two or more nodes connected through a communication link called a network. A processing system can be placed at each node of the network and can include one or more computer machines having a central processing unit (CPU). It is desirable to permit a computer machine at one node of the network to gain access to data files located at a remote node of the network. The term “client” is generally used to refer to a processing system that might desire access to a data file and the term “server” is generally used to refer to the processing system at a node where a desired data file is located. Often, distributed computer processing systems include dedicated servers that have no function other than to satisfy data file access requests from clients at the same node or at different nodes of the system.
A data file is a named set of bytes or records that are stored and processed as a unit by a process. A process comprises a set of program instructions that can be stored in addressable program memory storage of a computer machine and loaded into CPU registers that cause the instructions to be executed. A process whose instructions are being executed is said to be running or to be current. A data file that is being accessed (and therefore is potentially being modified) by a process is said to be open. The data file is otherwise said to be closed. Each node of a distributed computer processing system includes one or more operating system processes that provide an operating environment in which program execution, data transfer, and interprocess communication can take place.
Generally, the available computer machine memory is insufficient to provide an actual memory location at which every program instruction and desired data file record can be stored. Multiple processes and the data files they utilize can share the addressable memory available to a computer machine by the concept of virtual storage, which exists relative to an addressable address space of the computer machine. Virtual storage defines an address space of a computer machine memory to comprise fictitious (or “virtual”) memory locations at which program instructions and data files can be stored. Each virtual location of the address space is temporarily mapped onto an actual physical computer memory location that is used only while a process is running or a data file is open and actively using that portion of memory. When a process is not running or a data file is not open, it is stored in an auxiliary storage device, such as a disk drive.
Thus, the virtual storage address space is not limited by the actual number of memory locations of computer machine memory. Rather, virtual storage is limited only by the addressing scheme of a computer machine and the amount of auxiliary storage available. As a result, a distributed computer processing system can include a vast number of processes being executed in an essentially simultaneous fashion. Such concurrent processes can request data file access from servers at a very high rate.
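The mapping described in the two paragraphs above can be illustrated with a deliberately tiny sketch. This is not the patent's mechanism, just a toy model under assumed names (`VirtualStore`, `touch`): a small pool of "physical" frames backs a much larger virtual page space, and pages not currently held in a frame live in auxiliary storage.

```python
# Toy sketch (not the patent's mechanism): demand-mapping of a large
# virtual page space onto a small set of "physical" frames, with
# unmapped pages living in "auxiliary storage" (here, a plain dict).
class VirtualStore:
    def __init__(self, num_frames):
        self.frames = {}          # virtual page -> frame contents ("real" memory)
        self.num_frames = num_frames
        self.auxiliary = {}       # evicted pages, standing in for disk
        self.order = []           # FIFO eviction order

    def touch(self, page):
        """Map a virtual page into a physical frame, evicting if full."""
        if page in self.frames:
            return self.frames[page]
        if len(self.frames) >= self.num_frames:
            victim = self.order.pop(0)
            self.auxiliary[victim] = self.frames.pop(victim)  # page out
        # Page in from auxiliary storage, or allocate a fresh page.
        self.frames[page] = self.auxiliary.pop(page, bytearray(4096))
        self.order.append(page)
        return self.frames[page]

store = VirtualStore(num_frames=2)
store.touch(0); store.touch(1); store.touch(2)   # third page evicts page 0
assert 0 in store.auxiliary and 0 not in store.frames
```

The point of the sketch is only that the addressable page space (here, any integer page number) is far larger than the two frames actually resident at once.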
To facilitate communication between the various processes and network users, the distributed computer processing system typically provides an operating environment that includes pipes. Pipes are data structures that are used by processes to provide a means of storing data on a first-in-first-out (FIFO) basis so the data can be shared among the processes of an operating system. That is, a portion of a running user process (also called an application program) creates output data that it writes to a pipe and another portion of the same, or a different, user process reads the data from the pipe. Pipes permit processes to read and write data to and from some shared media, such as a common server memory, and permit such data to be shared with other processes. The operating system that supports such pipes typically includes read-write synchronization features to provide orderly read and write activity between processes. For example, a process might wait for a pipe write operation to occur before reading from the pipe.
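The FIFO read/write behavior described above can be sketched with an unnamed POSIX pipe; this is a minimal illustration, not the patent's implementation. One end writes, the other reads, and data comes out in arrival order:

```python
import os

# Minimal sketch of FIFO pipe behavior: one end writes, the other
# end reads the data back in first-in-first-out order.
r, w = os.pipe()
os.write(w, b"first;")
os.write(w, b"second;")
os.close(w)                      # writer done; reader sees EOF after the data
data = b""
while chunk := os.read(r, 64):   # drain the pipe in arrival order
    data += chunk
os.close(r)
print(data)                      # b'first;second;'
```

Note that the reader blocks while the pipe is empty and a writer still holds the write end open; this is the read-write synchronization the paragraph above refers to, provided here by the operating system.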
Many operating systems for distributed system applications support both named pipes and unnamed pipes. Unnamed pipes typically are implemented through storage queues or memory buffers to support local, tightly coupled communications within a processing system at a single network node. Named pipes typically are implemented as defined data objects in that they comprise object names with which data can be associated. The object names provide a reference for processes, and therefore named pipes can support more loosely coupled communications with more distant, remote network recipients.
Two or more processes communicate with named pipes by agreeing on a pipe name, defining a pipe by that name, and eventually opening a pipe having that defined name. As each process carries out such opens, as well as subsequent pipe reads and writes, the pipe operations are coordinated by the pipe server such that the pipe operations are synchronized between the participating processes. This synchronization through a named pipe data object, and passing of pipe data through the mutually opened named pipe data object, allows effective inter-process communications. This synchronized communication is entirely based on the selection of a name known by the participating application processes and the definition of a pipe by that name in a common server repository at a network node.
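The name-agreement protocol described above can be sketched on a POSIX system, where a named pipe is a FIFO special file. In this hedged illustration two threads stand in for two processes: both agree on the name `chan`, one opens it for writing, the other for reading, and the operating system synchronizes the two opens (POSIX-only; `os.mkfifo` is not available on Windows):

```python
import os
import tempfile
import threading

# Sketch of named-pipe rendezvous: define a pipe by an agreed name,
# then let a "writer" and a "reader" open it; the opens synchronize.
path = os.path.join(tempfile.mkdtemp(), "chan")
os.mkfifo(path)                       # define a pipe by the agreed name

def producer():
    with open(path, "wb") as pipe:    # blocks until a reader also opens
        pipe.write(b"hello via named pipe")

t = threading.Thread(target=producer)
t.start()
with open(path, "rb") as pipe:        # rendezvous with the writer
    msg = pipe.read()                 # reads until the writer closes
t.join()
os.unlink(path)                       # pipe data is transitory; discard it
print(msg)                            # b'hello via named pipe'
```

The blocking `open` calls are the synchronization point: neither side proceeds until both participants have opened the mutually known name.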
It is neither necessary nor desirable that the pipe server permanently store the pipe data as it would file data. Pipe data is transitory and typically is simply stored in server memory only while the named pipe is open. When all instances of one particular named pipe are closed, the associated data is discarded. That is, memory for holding the pipe data and its status can be freed. This is different from normal file data, which is retained in permanent storage (such as occurs when data is written to a direct access storage device (DASD), including disk drives and the like).
Despite the difference between data files and named pipes relative to the permanent storage of data, they do have in common the concepts of:
1. being named objects, which are objects that are defined in a server where processes can share them and are objects that require permanent storage of information about the object definition (called metadata) in the server repository; and
2. allowing a set of functional operations, including open, write, read, and close operations, by processes that share these objects.
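The two shared concepts above can be sketched as a toy server-side registry. The names here (`ObjectServer`, `define`) are hypothetical, not from the patent; the sketch shows the object definition (metadata) persisting in the server repository while pipe data exists only between open and close:

```python
from collections import deque

# Toy registry illustrating the two shared concepts: objects are named
# and their definitions (metadata) persist, while pipe *data* lives
# only while the object is open, and is discarded on close.
class ObjectServer:
    def __init__(self):
        self.metadata = {}           # permanent: name -> object definition
        self.open_data = {}          # transitory: name -> FIFO buffer

    def define(self, name, kind):
        self.metadata[name] = {"kind": kind}

    def open(self, name):
        if name not in self.metadata:
            raise KeyError(name)     # only defined objects can be opened
        self.open_data.setdefault(name, deque())

    def write(self, name, record):
        self.open_data[name].append(record)

    def read(self, name):
        return self.open_data[name].popleft()     # first in, first out

    def close(self, name):
        self.open_data.pop(name, None)            # discard transitory data

srv = ObjectServer()
srv.define("reports", kind="pipe")
srv.open("reports")
srv.write("reports", b"rec1"); srv.write("reports", b"rec2")
assert srv.read("reports") == b"rec1"
srv.close("reports")
assert "reports" in srv.metadata and "reports" not in srv.open_data
```

After `close`, the definition of `reports` survives in the metadata repository, but its buffered data is gone, matching the data file/named pipe distinction drawn above.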
Named pipe operations typically involve transfer of relatively small amounts of data that are necessary to support data file read and write activities. For example, a pipe might be used to communicate the name and storage address location of a data file to be used by a process. The transfer of such a small amount of data consumes almost as much in the way of system resources for each pipe read or write operation as larger data transfer operations of the server. Thus, pipe operations can interfere with primary data file transfer and handling operations of a server and can thereby adversely affect server efficiency.
System performance can be improved by storing information about objects, such as data files, within a local cache of a processing system at a client node, or at least in the local cache of a network node relatively close to the client node. The cached data can include not only information about data objects but also the data objects themselves. Such a scheme is called local data object caching.
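Local data object caching can be sketched as follows. The class and method names (`OwningServer`, `ClientNode`, `fetch`, `get`) are illustrative assumptions, not terms from the patent; the sketch only shows the client consulting its local cache first and contacting the owning server solely on a miss:

```python
# Hypothetical sketch of local data object caching: the client keeps a
# local cache keyed by object name and contacts the owning server
# (where the object is stored) only on a cache miss.
class OwningServer:
    def __init__(self, objects):
        self.objects = objects       # the server's permanent repository
        self.requests = 0            # count of client-server round trips

    def fetch(self, name):
        self.requests += 1
        return self.objects[name]

class ClientNode:
    def __init__(self, server):
        self.server = server
        self.cache = {}              # local cache of data objects

    def get(self, name):
        if name not in self.cache:                   # miss: ask owning server
            self.cache[name] = self.server.fetch(name)
        return self.cache[name]                      # hit: no server traffic

server = OwningServer({"f.txt": b"payload"})
client = ClientNode(server)
client.get("f.txt"); client.get("f.txt"); client.get("f.txt")
assert server.requests == 1          # two of the three reads were local hits
```

The benefit claimed in the paragraph above falls out directly: repeated reads of the same object generate no further communication with the owning server.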
Local data object caching can be very effective in minimizing server communications from the server at which a data object is stored, called the owning server. Local caching, however, still can eventually require processing of cached data objects by the owning server when
