Concurrent server and method of operation having...

Electrical computers and digital processing systems: multicomput – Distributed data processing – Client/server

Reexamination Certificate


Details

C709S217000, C709S218000, C709S241000, C705S075000

active

06195682

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to distributed information systems. More particularly, the invention relates to concurrent servers and methods of operation having client-server affinity.
2. Description of Prior Art
A concurrent server is a gateway in a distributed information system, e.g., a local area network or the Internet, and consists of a parent process and a number of child processes. “Parent” and “child” are well-known terms describing a relationship between processes in a multitasking environment in which the parent process calls a child process spawned by the parent and suspends its own operation until the child process aborts or completes. Parent/child processing is further described in the text “Power Programming With RPC” by J. Bloomer, published by O'Reilly & Associates, Inc., Sebastopol, Calif., 1992, at pages 152-160. Typically, the parent process waits on a well-known port for client requests and forwards these requests to child processes; the child processes actually provide the services to the client. Traditional client-server applications involve a “pool controller” process which dispatches client requests to server processes assigned to service them. One problem with “pool controllers” is that service may become unacceptable when the rate of client requests exceeds the rate at which the controller can process them, delaying responses. Similarly, in client-server applications involving “state-less” concurrent servers, the server processes are forced to maintain client-specific application state data in a shared-memory or persistent-memory area and refresh it for every service request from a particular client. The requirement of refreshing the state data imposes a performance penalty on concurrent servers.
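The parent/child relationship described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the parent spawns a child with `fork()`, then suspends in `waitpid()` until the child completes or aborts; the helper name `spawn_and_wait` is an assumption for this example.

```python
import os

def spawn_and_wait(task):
    """Parent forks a child to run `task`, then suspends until the
    child exits -- the parent/child model described above."""
    pid = os.fork()
    if pid == 0:
        # Child process: do the work and terminate with its status.
        os._exit(task())
    # Parent process: suspend until the child aborts or completes.
    _, wait_status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(wait_status)

if __name__ == "__main__":
    print("child exit code:", spawn_and_wait(lambda: 0))
```

In a concurrent server the parent would instead loop on `accept()` and fork one such child per client connection, without blocking on each child.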
What is needed to avoid the performance penalty from delayed response time or refresh time is a concurrent server and method of operation which provides client-server process affinity whereby a client is automatically reconnected to the server last servicing the client's previous service request.
Prior art related to concurrent servers includes the following:
U.S. Pat. No. 5,388,214, issued Feb. 7, 1995, discloses a parallel computer system including a request distribution network for distributing process requests to selected sets of processors in parallel. The network includes a plurality of request distribution nodes. Each node is connected to receive processing requests from, and to couple process data to, a parent; each request distribution node is further connected to couple processing requests to, and receive process data from, selected children. Each request distribution node, in response to request address information received from its parent, identifies a selected one of its children and thereafter couples further request address information and processing requests it receives in parallel to its children.
U.S. Pat. No. 5,544,316 discloses a system and method for optionally registering a local process to allow participation in a single system semantic. A mechanism is provided to allow processes to register as part of a distributed context. Registration may be accomplished explicitly by any child process naming its parent in the hierarchy, or automatically by the parent process when spawning children. This mechanism allows the destination nodes in the distributed environment to be selected at run time without requiring knowledge on the part of the executing process, and provides the ability to distribute processes based on machine type, available facilities, or load-balancing considerations.
U.S. Pat. No. 5,341,477, issued Aug. 23, 1994, discloses a broker mechanism which allocates a plurality of servers, each having available resource capacity, to a plurality of clients, delivering one of several services to the clients. The broker operates by monitoring a subset of all available servers capable of delivering the requested service. The broker receives client requests for the services and, based on a network policy and available resource capacity, selects one of the monitored servers to deliver the particular service to the requesting client. The selected server enforces its local policy of not allowing any connections exceeding its available resource capacity.
U.S. Pat. No. 5,560,008, issued Sep. 24, 1996, discloses a remote authentication and authorization process in a distributed data processing system. A set of credentials for the process is created at a server in response to a message from a client requesting a service. The server returns a credentials ID identifying the created set of credentials to the client process. The client uses this returned ID in subsequent requests and is authorized as controlled by the set of credentials identified by the returned ID. The server can deny access to the service if the ID returned in a subsequent request is determined by the server not to identify a set of credentials, and denies access if the server requires authentication for the process.
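The credentials-ID flow summarized above can be sketched as follows. The class and method names are illustrative assumptions, not taken from the patent: the server issues an opaque ID for a created credential set and later authorizes only requests whose ID identifies known credentials.

```python
import secrets

class CredentialServer:
    """Sketch of the remote-authorization flow: create credentials,
    return an ID, and authorize later requests against that ID."""
    def __init__(self):
        self._credentials = {}  # credentials ID -> set of granted services

    def authenticate(self, services):
        # Create a set of credentials and return its identifying ID.
        cred_id = secrets.token_hex(8)
        self._credentials[cred_id] = set(services)
        return cred_id

    def authorize(self, cred_id, service):
        creds = self._credentials.get(cred_id)
        if creds is None:
            return False        # ID does not identify credentials: deny
        return service in creds

server = CredentialServer()
cid = server.authenticate(["read"])
print(server.authorize(cid, "read"))      # True
print(server.authorize("bogus", "read"))  # False
```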
U.S. Pat. No. 5,761,507 discloses concurrent servers within a server with a transaction manager which intercepts all requests for service from any of a plurality of clients, establishes connections independently of task requests, and assigns available requests in the order the requests are received, to provide equitable distribution of service resources over an increased number of client/server connections. The transaction manager places all tasks in a queue and starts and stops additional servers based on queue length.
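A minimal sketch of that queue-based dispatch, assuming illustrative names and water-mark thresholds (the patent does not specify them): intercepted requests are queued in arrival order, and servers are started or stopped as the queue grows or drains.

```python
from collections import deque

class TransactionManager:
    """Sketch: FIFO task queue with queue-length-driven scaling."""
    HIGH_WATER = 4  # assumed: start another server at this queue length
    LOW_WATER = 1   # assumed: stop a spare server below this length

    def __init__(self):
        self.queue = deque()
        self.servers = 1

    def intercept(self, request):
        self.queue.append(request)            # queued in arrival order
        if len(self.queue) >= self.HIGH_WATER:
            self.servers += 1                 # start an additional server

    def dispatch(self):
        task = self.queue.popleft()           # first received, first served
        if len(self.queue) < self.LOW_WATER and self.servers > 1:
            self.servers -= 1                 # stop a surplus server
        return task

tm = TransactionManager()
for r in ["r1", "r2", "r3", "r4"]:
    tm.intercept(r)
print(tm.servers)     # 2: queue reached the high-water mark
print(tm.dispatch())  # r1
```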
U.S. Pat. No. 5,617,570 discloses a server which includes a dispatcher, a plurality of worker tasks, and a plurality of dispatcher shared memory areas wherein each dispatcher shared memory area corresponds to a worker task. The dispatcher is responsive to an operation call from a call mechanism for selecting a worker task to execute the operation call, receiving the operation call directly into the dispatcher shared memory space of the worker task, and indicating to the worker task that the operation call has been assigned to it. After the worker task completes, the dispatcher provides the results of the operation request directly from the dispatcher shared memory space of the worker task to the connection mechanism.
None of the prior art discloses a system and method for handling multiple client service requests using a broker to allocate service requests among different servers whereby a client is reconnected to the server that last served it, the client holding the key of the server that last served it and the server holding the matching client key it last used in serving that client, thereby eliminating any system performance penalty from delayed response time or refresh time in processing client requests.
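The matching-key affinity check described above can be sketched as follows. All names here are illustrative assumptions: the client presents the key of the server that last served it, and if that key matches the client key the server recorded, the server reuses its cached client-specific state instead of refreshing it.

```python
class AffineServer:
    """Sketch of matching-key client-server affinity."""
    def __init__(self):
        self.client_keys = {}  # client id -> key from the last session
        self.state_cache = {}  # client id -> client-specific state

    def serve(self, client_id, presented_key):
        if presented_key and self.client_keys.get(client_id) == presented_key:
            state = self.state_cache[client_id]  # affinity hit: no refresh
            refreshed = False
        else:
            state = {"session": 0}               # affinity miss: rebuild state
            refreshed = True
        state["session"] += 1
        # Issue a fresh key for the client's next reconnection.
        new_key = f"key-{client_id}-{state['session']}"
        self.client_keys[client_id] = new_key
        self.state_cache[client_id] = state
        return new_key, refreshed

srv = AffineServer()
k1, miss = srv.serve("C1", None)  # first contact: state must be built
k2, hit = srv.serve("C1", k1)     # reconnection with matching key
print(miss, hit)                  # True False
```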
SUMMARY OF THE INVENTION
An object of the invention is a concurrent server system and method of operation that provides client-server affinity upon reconnection of the client to the server which previously served the client.
Another object is a concurrent server system and method of operation for handling multiple client service requests without the need to store and refresh client-specific application data.
Another object is a concurrent server system and method of operation for keying a client to the last server previously serving the client upon reconnection.
Another object is a concurrent server system and method of operation for allocating client service requests among a plurality of concurrent servers.
These and other objects, features and advantages are achieved in a concurrent server system including a plurality of concurrent servers coupled to a plurality of clients through a broker that assigns the servers to a queue operating on a First-In/First-Out (FIFO) basis. Upon a client (C1) service request, the broker assigns the request to the server (S1) at the top of the queue and generates a server key (S

