Content caching with special handling of multiple identical...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

C711S113000, C711S154000

Reexamination Certificate

active

06775743

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
This invention relates generally to content caching, such as caching web pages in the context of web servers on the Internet. More particularly, the invention relates to such caching where multiple requests for the same content from the caching server to the content server(s) are specially handled.
2. Description of the Prior Art
Web browsing is perhaps one of the most popular applications of the Internet. Using a web browser program on a client, a user enters the address of a web site or web page hosted by one or more web servers. The client contacts these web server(s) at this address on the Internet, which is known as a uniform resource locator (URL) address. This address typically has the form “http://www.name.suffix/page.html,” where the suffix may be “com,” “org,” or another suffix. The web server receives the request and returns content to the client identified by the address. Once the client receives the content, the web browser program interprets it to properly display the content to the user.
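By way of illustration only, and not as part of the patent's disclosure, the request/response exchange described above can be sketched with Python's standard library; the URL shown is a hypothetical example.

from urllib.request import urlopen

url = "http://www.example.com/page.html"   # hypothetical address (URL) of a web page
with urlopen(url) as response:             # the client contacts the web server at this address
    content = response.read()              # the web server returns the content identified by the URL
print(content[:200])                       # a browser would now interpret and display this content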
A difficulty that has become apparent when using web servers to host web sites is that a web server can usually serve only so many web pages at a time. For instance, a web server may be able to handle 1,000 web page requests per second. As the popularity of a web site increases, however, the web server may have to handle more requests than it was originally capable of. Typical solutions have centered on implementing scalable servers, which can be expanded as need warrants, and on adding web servers, which can be clustered with the original server(s) so that proper load balancing is achieved.
A more recent solution that has seen increased usage is employing a caching server. A caching server is usually located in front of the web server, and may handle incoming web page requests directly from clients, passing on requests to the web server only when necessary. The caching server caches content according to a predetermined approach, such that the caching server can respond to client requests for content that has been cached without passing the requests along to the web server. This alleviates much of the burden placed on the web server, effectively allowing the web server to handle more client requests than if the caching server were not used.
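A minimal sketch of this arrangement, assuming an in-memory dictionary as the cache and a plain HTTP fetch as the pass-through to the web server (the function names are invented for the example, not taken from the patent):

from urllib.request import urlopen

cache = {}  # maps a content identifier (here, the URL) to previously returned content

def fetch_from_origin(url):
    # stand-in for passing the request along to the web (origin) server
    with urlopen(url) as response:
        return response.read()

def handle_request(url):
    # respond from the cache when possible, without involving the web server
    if url in cache:
        return cache[url]
    content = fetch_from_origin(url)   # pass the request to the web server only when necessary
    cache[url] = content               # cache the response for later identical requests
    return content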
Content typically can be classified as either cacheable or non-cacheable. Cacheable content is generally that which is not particular to a given user's request, and that does not require customization by the web server to properly respond to the user's request. For instance, a news-oriented web site may have content regarding a breaking news story. Regardless of the client that requests this content, the web site returns the same content. Therefore, this content is cacheable, in that the caching server is able to cache the content and return it to any requesting client.
By comparison, non-cacheable content is generally that which is particular to a given user's request, and thus requires customization by the web server to properly respond to the user's request. For instance, a banking web site may have content directed to each of its account holder's accounts. When a client requests account-related content from the site, the content is not applicable to any other client, since each client will request account-related content of its own user. Therefore, this content is non-cacheable, in that the caching server should not cache the content.
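The classification itself could be sketched roughly as follows; the Cache-Control and path-prefix rules below are illustrative assumptions, not rules stated in the patent.

def is_cacheable(path, response_headers):
    # non-cacheable content is customized per user, signalled here by hypothetical conventions
    cache_control = response_headers.get("Cache-Control", "").lower()
    if "private" in cache_control or "no-store" in cache_control:
        return False
    if path.startswith("/accounts/"):      # e.g. a banking site's per-account pages
        return False
    return True                            # e.g. a breaking-news page served identically to all clients

# the same news page is cacheable for every client; an account page is not
assert is_cacheable("/news/breaking.html", {"Cache-Control": "public"})
assert not is_cacheable("/accounts/12345", {"Cache-Control": "private"})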
A problem with caching servers, however, occurs when multiple client requests for the same content are received in a short time span, before the web server responsible for the content has responded to the first such request. For example, the caching server may receive a client request for cacheable content that has not yet been cached, and pass it along to the web server. Before the caching server receives the content from the web server to cache and return to the requesting client, it receives one or more additional client requests for the same content. Because this content has not yet been cached, the caching server passes these requests along to the web server as well.
The web server in this scenario is therefore burdened with having to respond to essentially the same request for content a number of times, in contradistinction to the purpose of having a caching server offload this burden from the web server. With each additional client request for the same content passed along to the web server before the web server responds to the original request for the content, the web server's ability to handle any request for content for which it is responsible decreases. That is, with each additional client request that it receives for the same content, there is a longer delay in the web server responding to the initial client request for this content.
This effect is generally known as positive feedback, or the snowball effect: the more client requests the web server receives, the worse its performance becomes. Existing caching servers assume that this situation occurs with sufficiently low frequency that it does not affect the web server's ability to handle content requests. However, this assumption is dubious at best, and even if the scenario occurs infrequently, when it does it can greatly degrade web server performance, to the detriment of the users and the operator of the web site. For these and other reasons, there is a need for the present invention.
SUMMARY OF THE INVENTION
The invention relates to special handling of multiple identical requests for content during content caching. A method of the invention receives a request for content. The method performs at least one of two actions. First, in response to determining that the content is cacheable and that a previous request for the content has already been forwarded to a server responsible for the content, the method waits to process the request until it receives a response to the previous request. Second, in response to determining that the content is non-cacheable, the method forwards the request to the server responsible for the content.
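As a hedged sketch only, assuming a threaded caching server and invented helper names (is_cacheable, forward_to_server), this decision logic could look like the following; it is not the patent's required implementation.

import threading

cache = {}        # identifier -> cached content
outstanding = {}  # identifier -> Event that is set when the forwarded request completes
lock = threading.Lock()

def handle(identifier, is_cacheable, forward_to_server):
    if not is_cacheable(identifier):
        return forward_to_server(identifier)      # non-cacheable: always forward to the server

    with lock:
        if identifier in cache:
            return cache[identifier]              # already cached: respond immediately
        event = outstanding.get(identifier)
        if event is None:                         # no previous identical request is in flight
            event = threading.Event()
            outstanding[identifier] = event
            is_first = True
        else:
            is_first = False                      # a previous request was already forwarded

    if is_first:
        content = forward_to_server(identifier)   # forward once on behalf of all identical requests
        with lock:
            cache[identifier] = content
            del outstanding[identifier]
        event.set()                               # wake any requests waiting on this response
        return content

    event.wait()                                  # wait until the previous request is answered
    return cache[identifier]                      # then respond from the freshly cached content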
A system of the invention includes first and second storages, and a mechanism. The first storage stores cacheable content by identifiers thereof, where the cacheable content is received from one or more content servers. The second storage tracks outstanding requests for content that have been sent to the content servers, by identifiers of the content. The mechanism receives a new request for content that includes an identifier. In response to determining that the second storage is tracking other request(s) also having the identifier of the new request, the mechanism adds the new request to these other request(s) in the second storage.
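Read concretely, and with class and attribute names invented for illustration, the two storages and the mechanism could be sketched like this:

class CachingSystem:
    def __init__(self):
        self.cached_content = {}   # first storage: content identifier -> cacheable content
        self.outstanding = {}      # second storage: identifier -> requests awaiting a server response

    def receive(self, identifier, request):
        # mechanism: if other requests with this identifier are already being tracked,
        # add the new request to them instead of forwarding it again
        if identifier in self.outstanding:
            self.outstanding[identifier].append(request)
            return None
        self.outstanding[identifier] = [request]
        return "forward"           # the caller forwards this first request to the content server

    def response_received(self, identifier, content):
        # when the content server responds, cache the content and release the tracked requests
        self.cached_content[identifier] = content
        return self.outstanding.pop(identifier, [])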
An article of manufacture of the invention includes a computer-readable medium and means in the medium. The means in the medium is for waiting to process a received request for content, until a response is received to another request for the content that has already been sent to a server responsible for the content. The means is also for forwarding the received request to the server after determining that the content is non-cacheable. Other features and advantages of the invention will become apparent from the following detailed description of the presently preferred embodiment of the invention, taken in conjunction with the accompanying drawings.


REFERENCES:
patent: 6199107 (2001-03-01), Dujari
patent: 6249844 (2001-06-01), Schloss et al.
patent: 6351767 (2002-02-01), Batchelder et al.
patent: 6370571 (2002-04-01), Medin, Jr.
patent: 6427187 (2002-07-01), Malcolm
patent: 6587928 (2003-07-01), Periyannan et al.
patent: 2002/0048269 (2002-04-01), Hong et al.
patent: 2002/0062372 (2002-05-01), Hong et al.
Fielding et al., “Hypertext Transfer Protocol—HTTP/1.1,” RFC 2616, pp. 1-114, The Internet Society, Jun. 1999.*
Wessels et al., “Internet Cache Protocol (ICP), Version 2,” RFC 2186, The Internet Society, Sep. 1997.
