Methods, systems, and machine readable programming for...

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer data routing – Least weight routing

Details

C709S230000, C709S227000, C709S203000

Reexamination Certificate

active

06609159

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to client-server network computing, and, in particular, to methods, systems, and machine readable programming for interposing front end servers between servers and clients in such network computing.
BACKGROUND OF THE INVENTION
As the use of computer networks expands, client-server computing has become increasingly important. In client-server computing, client computers access, over a computer network, information and computational resources available on a server computer. The server computer runs one or more server software programs which provide clients access to centralized resources. These server programs are typically pre-written object code programs purchased or acquired from a third party, so they usually cannot be directly modified by their end users. Programs typically called front end servers or middleware exist to enhance these servers, but they require the administrator of a particular computer to reconfigure the server's parameters and to support the resulting increase in system complexity.
The World Wide Web (Web) on the Internet is an extremely popular large scale networked system consisting largely of Web client programs on client computers, and Web server programs on server computers. In addition to server programs, most Web server computers also contain application server programs, such as CGI programs, which are called by server programs to handle certain types of client requests; and stored data, including Web pages and graphic files. It is possible for one computer to function both as a client and a server computer.
In the Web, client programs are commonly called browsers and the two most popular are Netscape Navigator and Microsoft Internet Explorer. The server programs are called Web servers (as opposed to the many other server program types on the Internet, such as FTP servers, mail servers, etc.) and they can host one or more Web sites on each computer. Some of the more popular Web servers are the free Apache server and commercial ones from Netscape and Microsoft.
As with any heavily used program, the efficiency of Web servers is a major concern of the managers and administrators of Web sites. The reasons for concern are several. Web users have come to expect a certain level of dynamic response from the Web. Users who perceive a site to be slow often will not visit it again. Sometimes a very heavily loaded Web server system can almost stop responding as it thrashes about doing I/O and doing very little useful work. When a server is thrashing, requests are accepted and serviced at a lower rate, further degrading performance in a vicious cycle. Extreme load on a server can result in stability problems, crashes, and increased cost of maintaining the server.
There are many causes for inefficiency of Web servers. The following are some of the major ones and some of the current solutions for them.
Each request to a Web server from a browser using the HTTP 1.0 protocol, currently in common use, requires a new TCP/IP connection. This slows down the Web server, because it requires the server to accept and manage more connections. It also slows down the Internet itself, because it requires more information to be transported to establish the new connections for each separate client request. This problem is made worse by the fact that each separate object displayed in a web page, such as each separate graphic image, is usually obtained by a separate client request, meaning that accessing one web page commonly requires many separate connections.
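As a rough illustration of the per-request connection cost described above, the following Python sketch fetches each object of a page over its own short-lived connection, HTTP 1.0 style. The host name and object paths are hypothetical placeholders, not details from the patent.

```python
import http.client

HOST = "www.example.com"                                   # hypothetical server
PAGE_OBJECTS = ["/index.html", "/logo.gif", "/photo.jpg"]  # hypothetical page objects

for path in PAGE_OBJECTS:
    conn = http.client.HTTPConnection(HOST)     # new TCP connection per object
    conn.request("GET", path, headers={"Connection": "close"})
    conn.getresponse().read()                   # one response, then...
    conn.close()                                # ...the connection is torn down
```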
The new HTTP/1.1 protocol helps solve this problem by multiplexing requests and responses from a given client over a single client connection. Some Web servers now being sold support this new protocol, but not all do. Furthermore, a large number of servers currently in use do not support this new protocol, and will need to be upgraded to do so. These upgrades can be costly in terms of purchase cost, administration expense, and down time. Such costs could slow the usage of the HTTP/1.1 protocol, or of any new, improved protocol.
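By contrast, a persistent HTTP/1.1 connection can carry all of those requests. The sketch below, again with a hypothetical host and paths, reuses one connection for the whole page; Python's http.client speaks HTTP/1.1 by default, so the socket can stay open between requests as long as each response body is drained.

```python
import http.client

HOST = "www.example.com"                                   # hypothetical server
PAGE_OBJECTS = ["/index.html", "/logo.gif", "/photo.jpg"]  # hypothetical page objects

conn = http.client.HTTPConnection(HOST)         # one connection for all objects
for path in PAGE_OBJECTS:
    conn.request("GET", path)                   # sent over the same TCP connection
    conn.getresponse().read()                   # body must be read before reuse
conn.close()
```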
Another factor which slows server performance is the lack of caching. Many servers in current use, and even some still being sold, read each Web file requested by a client into memory each separate time it is requested. A caching Web server would normally keep a copy of many requested web data objects in memory. If a client request is for one of the data objects stored in its memory, the server is spared having to read the requested object from disk and can send a copy directly from memory to the requesting client, saving considerable time.
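The caching idea described above can be sketched as a small in-memory map from file path to file contents: a requested object is read from disk only the first time, or when the file has changed on disk, and is served from memory afterwards. The function name and the modification-time check are illustrative choices, not the patent's design.

```python
import os

_cache = {}   # maps file path -> (modification time, file bytes)

def get_object(path):
    """Return the bytes of `path`, reading from disk only when necessary."""
    mtime = os.path.getmtime(path)
    entry = _cache.get(path)
    if entry is not None and entry[0] == mtime:
        return entry[1]                    # cache hit: served from memory
    with open(path, "rb") as f:            # cache miss: read the file once
        data = f.read()
    _cache[path] = (mtime, data)           # remember it for later requests
    return data
```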
There are a wide variety of caching Web servers on the market, some free and some commercial. A few are integrated with the Web server itself. Most are separate servers which act as a caching front end interposed between clients and the server they work with, which we will call a back end server. These front end servers must forward non-cached requests to the back end server and relay I/O between it and clients.
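A caching front end of the kind described above can be sketched as a small HTTP proxy: it answers cache hits from memory and forwards misses to the back end server. The addresses, the cache-everything-on-200 policy, and the use of Python's standard http.server are assumptions made for illustration only.

```python
import http.client
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080    # hypothetical back end server
_cache = {}                                       # request path -> (status, headers, body)

class CachingFrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        entry = _cache.get(self.path)
        if entry is None:
            # Cache miss: forward the request to the back end server.
            backend = http.client.HTTPConnection(BACKEND_HOST, BACKEND_PORT)
            backend.request("GET", self.path)
            resp = backend.getresponse()
            body = resp.read()
            entry = (resp.status, resp.getheaders(), body)
            backend.close()
            if resp.status == 200:                # keep only successful replies
                _cache[self.path] = entry
        status, headers, body = entry
        self.send_response(status)
        for name, value in headers:
            # Hop-by-hop headers are not forwarded to the client.
            if name.lower() not in ("connection", "transfer-encoding", "keep-alive"):
                self.send_header(name, value)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The front end listens where clients expect the Web server to be.
    HTTPServer(("", 8000), CachingFrontEnd).serve_forever()
```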
A difficulty with prior caching front ends is that they normally require the back end server to be reconfigured to communicate with them, instead of directly with clients. Such back end reconfiguration can also require modification of the Web pages and CGI programs used by the back end, because the caching front end has to use the TCP numbers previously used by the back end, requiring that the back end use new TCP numbers. Such reconfiguration of the back end may also require routing information and other network configurations to be changed.
CGI stands for Common Gateway Interface. It enables a Web server to call programs, often referred to as CGI scripts, to handle requests by a client which involve more than the mere sending of previously recorded data objects. A Web server typically handles a CGI request by forking off a new process of the desired CGI program and piping data to and from that new process. It is often desirable to maintain “session affinity” with CGI scripts, that is, to cause each of a succession of requests from a given client to be handled by the same forked CGI process. This is important because CGI scripts often handle the processing of individual transactions over the Web which require multiple related client requests, such as the filling out of multi-page registration forms or the placing of multiple articles into an electronic shopping cart and then purchasing them with a credit card. Many prior art servers, particularly those using the HTTP 1.0 protocol, maintain session affinity by use of cookies, i.e., data recorded in a client which the client will place in the headers of subsequent requests to the server to uniquely identify the client as one to be associated with a given CGI process. Unfortunately, using cookies to maintain session affinity requires the extra overhead of writing cookie information to clients, keeping track of which client has been sent which cookie, then parsing the cookie information from a client request, and looking that information up in the cookie/CGI-session database to determine to which CGI session a given request should be sent.
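The cookie-based session affinity described above can be sketched as a table mapping a session identifier, carried in the client's cookie, to a forked CGI process; each later request presenting that cookie is piped to the same process. The script name, the line-oriented framing, and the helper function below are hypothetical, not the patent's mechanism.

```python
import subprocess
import uuid

CGI_SCRIPT = "./cart.cgi"       # hypothetical CGI program
_sessions = {}                  # session id carried in the cookie -> CGI process

def handle_request(session_id, request_line):
    """Route a client request to the CGI process bound to its session."""
    if session_id is None or session_id not in _sessions:
        # First request from this client: fork a CGI process and issue a cookie.
        session_id = str(uuid.uuid4())
        _sessions[session_id] = subprocess.Popen(
            [CGI_SCRIPT], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    proc = _sessions[session_id]
    proc.stdin.write(request_line.encode() + b"\n")   # pipe data to the process
    proc.stdin.flush()
    reply = proc.stdout.readline()                    # read its reply (one line here)
    return session_id, reply                          # session id is sent back as a cookie
```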
There are many other functional improvements which could be made to many servers, whether they be Web servers or other types of servers. But the ability of those who operate servers to make such improvements is often hindered by the fact, referred to above, that making changes to a server computer's configuration can be costly. New server programs cost money. Perhaps even more daunting is the fact that changing server programs can often require that many time-consuming changes be made to web pages and CGI scripts. The prior art has used front end servers to increase the functionality of back end servers, but usually at a considerable cost in terms of the reconfiguration required to use the added functionality such front end servers provide.
SUMMARY OF THE INVENTION
This invention relates to meth
