Methods and apparatus for caching network data traffic

Electrical computers and digital processing systems: multicomputer data transferring – Computer-to-computer protocol implementing – Computer-to-computer data transfer regulating



Details

US Classes: C709S241000, C709S200000, C709S217000, C709S225000, C709S228000

Type: Reexamination Certificate

Status: active

Patent number: 06240461

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to transmission of data in a network environment. More specifically, the present invention relates to methods and apparatus for improving the efficiency with which data are transmitted over the Internet.
Generally speaking, when a client platform communicates with some remote server, whether via the Internet or an intranet, it crafts a data packet which defines a TCP connection between the two hosts, i.e., the client platform and the destination server. More specifically, the data packet has headers which include the destination IP address, the destination port, the source IP address, the source port, and the protocol type. The destination IP address might be the address of a well-known World Wide Web (WWW) search engine such as, for example, Yahoo, in which case the protocol would be TCP and the destination port would be port 80, a well-known port for HTTP and the WWW. The source IP address would, of course, be the IP address for the client platform, and the source port would be one of the TCP ports selected by the client. These five pieces of information define the TCP connection.
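As a rough illustration only (not taken from the patent), the five-tuple that identifies such a connection could be modeled as follows; the field names and example addresses are hypothetical:

    # Minimal sketch of the five-tuple identifying a TCP connection.
    # Names and example values are illustrative, not from the patent.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConnectionTuple:
        protocol: str   # e.g. "TCP"
        src_ip: str     # IP address of the client platform
        src_port: int   # TCP port selected by the client
        dst_ip: str     # IP address of the destination server
        dst_port: int   # e.g. 80, the well-known port for HTTP

    # Example: a client requesting a page from a web server on port 80.
    conn = ConnectionTuple("TCP", "192.0.2.10", 49152, "203.0.113.5", 80)
    print(conn)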
Given the increase of traffic on the World Wide Web and the growing bandwidth demands of ever more sophisticated multimedia content, there has been constant pressure to find more efficient ways to service data requests than opening direct TCP connections between a requesting client and the primary repository for the desired data. Interestingly, one technique for increasing the efficiency with which data requests are serviced came about as the result of the development of network firewalls in response to security concerns. In the early development of such security measures, proxy servers were employed as firewalls to protect networks and their client machines from corruption by undesirable content and unauthorized access from the outside world. Proxy servers were originally based on Unix machines because that was the prevalent technology at the time. This model was generalized with the advent of SOCKS, which was essentially a daemon on a Unix machine. Software on a client platform on the network protected by the firewall was specially configured to communicate with the resident daemon, which then made the connection to a destination platform at the client's request. The daemon then passed information back and forth between the client and destination platforms, acting as an intermediary or “proxy”.
Not only did this model provide the desired protection for the client's network, it gave the entire network the IP address of the proxy server, thereby simplifying the problem of addressing data packets to an increasing number of users. Moreover, because of the storage capability of the proxy server, information retrieved from remote servers could be stored rather than simply passed through to the requesting platform. This storage capability was quickly recognized as a means by which access to the World Wide Web could be accelerated. That is, by storing frequently requested data, subsequent requests for the same data could be serviced without having to retrieve the requested data from its original remote source. Currently, most Internet service providers (ISPs) accelerate access to their web sites using proxy servers.
Unfortunately, interaction with such proxy servers is not transparent, requiring each end user to select the appropriate proxy configuration in his or her browser to allow the browser to communicate with the proxy server. For large ISPs with millions of customers, there is significant overhead associated with handling tech support calls from customers who have no idea what a proxy configuration is. Additional overhead is associated with the fact that different proxy configurations must be provided for different customer operating systems. The considerable economic expense represented by this overhead offsets the benefits derived from providing accelerated access to the World Wide Web. Another problem arises as the number of WWW users increases. That is, as the number of customers for each ISP increases, the number of proxy servers required to service the growing customer base also increases. This, in turn, presents the problem of allocating packet traffic among multiple proxy servers.
It is therefore apparent that there is a need for new techniques which improve the efficiency with which data are transferred between platforms. It is also desirable that such techniques be transparent to the end users.
SUMMARY OF THE INVENTION
According to the present invention, methods and apparatus are provided which facilitate the transmission of data between platforms interconnected by any of a variety of network environments. Essentially, the present invention represents an improvement over the proxy server model which is transparent to end users, high performance, and fault tolerant. By altering the operating system code of an existing router (such as those available from Cisco Systems, Inc.), the router is enabled to redirect data traffic of a particular protocol intended for a specified port, e.g., TCP with port 80, to one or more caching engines connected to the router via an interface having sufficient bandwidth such as, for example, a 100BaseT interface. If there are multiple caching engines connected to the cache-enabled router, the router selects from among the available caching engines for a particular request based on a simple algorithm according to which a particular group or “bucket” of addresses is associated with each caching engine.
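The summary does not spell out the bucket algorithm. The following sketch is one plausible reading, assuming that only TCP traffic destined for port 80 is redirected and that the bucket index is derived from the destination address; the hash, the bucket count of 256, and the engine names are assumptions made purely for illustration:

    # Hypothetical sketch of the redirection and bucket-selection step.
    NUM_BUCKETS = 256

    def bucket_for(dst_ip: str) -> int:
        # Fold the four octets of the destination address into a bucket index
        # (an assumed hash; the patent only says buckets of addresses exist).
        octets = [int(o) for o in dst_ip.split(".")]
        return sum(octets) % NUM_BUCKETS

    def select_cache_engine(protocol: str, dst_ip: str, dst_port: int,
                            bucket_map: dict[int, str]) -> str | None:
        # Only TCP traffic to port 80 is redirected; anything else is
        # routed normally (None here).
        if protocol != "TCP" or dst_port != 80:
            return None
        return bucket_map.get(bucket_for(dst_ip))

    # Example: two caching engines split the bucket space evenly.
    engines = {b: ("cache-1" if b < 128 else "cache-2") for b in range(NUM_BUCKETS)}
    print(select_cache_engine("TCP", "203.0.113.5", 80, engines))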
The caching engine to which the request is re-routed “spoofs” the requested destination platform and accepts the request on its behalf via a standard TCP connection established by the cache-enabled router. If the requested information is already stored in the caching engine, it is transmitted to the requesting platform with a header indicating its source as the destination platform. If the requested information is not in the caching engine, the caching engine opens a direct TCP connection with the destination platform, downloads the information, stores it for future use, and transmits it to the requesting platform. All of this is transparent to the user at the requesting platform, which operates exactly as if it were communicating with the destination platform. Thus, the need for configuring the requesting platform to suit a particular proxy configuration is eliminated along with the associated overhead. Moreover, traffic may be easily allocated among as many caching engines as become necessary.
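The serve-or-fetch behaviour described above can be summarized in a short sketch. The in-memory dictionaries and the fetch_from_origin helper below are hypothetical stand-ins for the caching engine's object store and its direct TCP connection to the origin server; they are not taken from the patent:

    # Hypothetical sketch of the caching engine's hit/miss handling.
    ORIGIN = {"http://example.com/index.html": b"<html>hello</html>"}  # stand-in origin

    cache: dict[str, bytes] = {}

    def fetch_from_origin(url: str) -> bytes:
        # Placeholder for opening a direct TCP connection to the destination
        # platform and downloading the requested object.
        return ORIGIN[url]

    def handle_request(url: str) -> bytes:
        if url in cache:                   # hit: answer on the origin's behalf
            return cache[url]
        data = fetch_from_origin(url)      # miss: fetch from the destination platform
        cache[url] = data                  # store for future requests
        return data                        # reply appears to come from the origin

    # The first request misses and populates the store; the second is served locally.
    print(handle_request("http://example.com/index.html"))
    print(handle_request("http://example.com/index.html"))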
Thus, the present invention provides a method for facilitating data transmission in a network. A first data request is received at a first intermediate platform, the first data request indicating a source platform and a destination platform. The first data request is redirected by the first intermediate platform to a first cache platform associated with the intermediate platform. Data corresponding to the first data request is transmitted from the first cache platform to the source platform. The data indicates origination from the destination platform.
A further understanding of the nature and advantages of the present invention may be realized by reference to the remaining portions of the specification and the drawings.


REFERENCES:
patent: 4827411 (1989-05-01), Arrowood et al.
patent: 4965772 (1990-10-01), Daniel et al.
patent: 5452447 (1995-09-01), Nelson et al.
patent: 5511208 (1996-04-01), Boyles et al.
patent: 5555244 (1996-09-01), Gupta et al.
patent: 5611049 (1997-03-01), Pitts
patent: 5673265 (1997-09-01), Gupta et al.
patent: 5751971 (1998-05-01), Dobbins et al.
patent: 5774660 (1998-06-01), Brendel et al.
patent: 5787470 (1998-07-01), DeSimone et al.
patent: 5848241 (1998-12-01), Misinai et al.
patent: 5852717 (1998-12-01), Bhide et al.
patent: 6006264 (1999-12-01), Colby et al.
Martin Arlitt, Ludmila Cherkasova, John Dilley, Rich Friedrich, and Tai Jin, of Hewlett-Packard Laboratories, “Evaluating Content Management Techniques for Web Proxy Caches”, Apr. 30, 1999, 9 pages.
Valloppillil, V
