Bounding delays and reducing threading overheads in caching

Electrical computers and digital processing systems: multicomputer data transferring – Distributed data processing – Client/server

Reexamination Certificate


Details

U.S. Classification: C709S241000
Type: Reexamination Certificate
Status: active
Patent Number: 06665704

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to retrieving information for client processes and, in particular, to using cache memory to service multiple concurrent requests while maintaining boundaries on the delay to service any one client process.
BACKGROUND OF THE INVENTION
The Internet has experienced phenomenal growth in recent years, in part due to the explosion of e-commerce and its increased use for educational and entertainment purposes. More and more businesses and consumers are relying on the Internet for information. Unfortunately, however, capacity enhancements of the Internet backbone have not kept pace with the Internet's increased usage and geographical span. As a result, users experience network delays when accessing websites.
To address this problem, many servers, e.g., proxy servers, have begun to cache documents retrieved from web servers so that subsequent client requests for those documents can be serviced more quickly. These servers make a copy of the requested documents received from web servers and service all their local client requests by sending them this copy. A problem occurs when many clients request the same document from a web server concurrently. How to satisfy these requests without unduly burdening any client is a nontrivial problem. The server servicing the local clients is further hampered by not knowing the size of the document being requested. Storing a document in the cache while serving it to multiple concurrent requests is difficult, and trying to place bounds on the delay any one client may experience while reducing the processing overhead complicates the matter even more.
Systems and methods have been established in an attempt to address these problems.
FIG. 1 depicts a system for downloading a document from a server process and sending it to browser clients. The browser clients 115 and 125 request information from the web server 105. The cache entry 110 is located in a memory that can be quickly accessed on behalf of a browser client; for example, it may be located in the cache of a proxy server that is servicing multiple local clients, including the browser client 115. Only two browser clients and one server are shown; however, one skilled in the art will recognize that many browser clients may desire the same information from multiple web servers 105.
Thread 1 120 is created by the thread processing component 103 in response to browser client 1 115 requesting information, or a document, from the web server 105. A thread contains the instructions necessary to retrieve the information from the web server for the client with which that thread is associated. The thread processing component 103 is contained in an interface between the browser clients 115, 125 and a web server 105 and has access to a cache entry 110. For example, a proxy server that interfaces with a browser client and a web server may contain the thread processing component, creating a thread in response to a client's request.
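The thread-per-request structure described above could be sketched roughly as follows. This is an illustrative sketch only; the class and method names (ThreadProcessingComponent, handleRequest, downloadAndServe, serveFromCache) and the use of a CompletableFuture as the cache entry are assumptions, not the patent's actual design.

```java
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch of a thread processing component inside a proxy server:
// one thread is created per client request, and all threads requesting the
// same URL share a single cache entry.
public class ThreadProcessingComponent {

    // One shared cache entry per requested URL (CompletableFuture is an
    // illustrative stand-in for the cache entry).
    private final ConcurrentMap<String, CompletableFuture<byte[]>> cache =
            new ConcurrentHashMap<>();

    public Thread handleRequest(String url, OutputStream clientOut) {
        CompletableFuture<byte[]> fresh = new CompletableFuture<>();
        CompletableFuture<byte[]> existing = cache.putIfAbsent(url, fresh);

        Runnable task;
        if (existing == null) {
            // First requester: this thread downloads the document (producer)
            // and also serves its own client (consumer).
            task = () -> downloadAndServe(url, fresh, clientOut);
        } else {
            // Later requesters: consumer only, served from the shared cache entry.
            task = () -> serveFromCache(existing, clientOut);
        }
        Thread worker = new Thread(task);
        worker.start();
        return worker;
    }

    private void downloadAndServe(String url, CompletableFuture<byte[]> entry,
                                  OutputStream clientOut) {
        // Producer/consumer behavior; see the next sketch.
    }

    private void serveFromCache(CompletableFuture<byte[]> entry, OutputStream clientOut) {
        // Consumer-only behavior; see the later sketch.
    }
}
```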
Continuing with FIG. 1, Thread 1 120 functions as both a producer and a consumer thread. As a producer, Thread 1 retrieves the client-requested information from the web server 105. When Thread 1 120 receives the requested information from the web server 105, it also places this information in a cache entry 110. As a consumer, it places the information supplied by the web server in the input stream for its associated client, i.e., browser client 1 115. When operating as a consumer thread, a thread may retrieve the requested information for a client directly from the web server or from a secondary source, such as a cache.
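A minimal sketch of Thread 1's dual role in the FIG. 1 approach might look like the following, assuming the whole document is cached before being forwarded. The class name and the use of a CompletableFuture as the cache entry are illustrative assumptions.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of Thread 1's dual role: as producer it downloads the
// whole document into a shared cache slot, and as consumer it then writes
// that document to its own client's stream.
public class ProducerConsumerThread implements Runnable {

    private final String url;
    private final CompletableFuture<byte[]> cacheEntry; // shared with consumer threads
    private final OutputStream clientOut;               // browser client 1's stream

    public ProducerConsumerThread(String url,
                                  CompletableFuture<byte[]> cacheEntry,
                                  OutputStream clientOut) {
        this.url = url;
        this.cacheEntry = cacheEntry;
        this.clientOut = clientOut;
    }

    @Override
    public void run() {
        try {
            // Producer role: retrieve the requested document from the web server.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            try (InputStream serverIn = new URL(url).openStream()) {
                serverIn.transferTo(buffer);
            }
            byte[] document = buffer.toByteArray();

            // Place the complete document in the cache entry for other threads.
            cacheEntry.complete(document);

            // Consumer role: send the document to this thread's own client.
            clientOut.write(document);
            clientOut.flush();
        } catch (Exception e) {
            cacheEntry.completeExceptionally(e); // let waiting consumers fail too
        }
    }
}
```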
Browser client 2 125 also requests the same information as browser client 1 115. However, because browser client 1 115 has already caused a thread to be created that acts as both producer and consumer, Thread 2 130, created in response to browser client 2's 125 request, functions only as a consumer. Thread 2 130 retrieves the requested information from the cache entry 110 after it has been placed there by Thread 1 120.
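Thread 2's consumer-only role could then be sketched as below, again using the hypothetical CompletableFuture cache entry. Because get() returns only once the producer has stored the complete document, no consumer sees any data until the entire download finishes, which is the drawback discussed next.

```java
import java.io.OutputStream;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of Thread 2's consumer-only role: it never contacts the
// web server; it simply waits until the producer thread has filled the shared
// cache entry and then copies the document to its own client's stream.
public class ConsumerOnlyThread implements Runnable {

    private final CompletableFuture<byte[]> cacheEntry; // filled by Thread 1
    private final OutputStream clientOut;               // browser client 2's stream

    public ConsumerOnlyThread(CompletableFuture<byte[]> cacheEntry, OutputStream clientOut) {
        this.cacheEntry = cacheEntry;
        this.clientOut = clientOut;
    }

    @Override
    public void run() {
        try {
            byte[] document = cacheEntry.get(); // blocks until the producer finishes
            clientOut.write(document);
            clientOut.flush();
        } catch (Exception e) {
            // A real proxy would report an error back to the client here.
        }
    }
}
```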
In this approach, however, all the requested information, e.g., an entire document, is placed in the cache entry 110 before either thread places, or transmits, any information to the output stream for its respective client. While this solution is simple and easy to manage, it suffers in that clients do not receive any data until all the information has been downloaded from the web server. Consequently, there is a large initial delay before any client receives any of the requested information from the web server. This is a problem because many browser clients and/or users may not be aware that the system is working and may terminate a download without knowing that the system is actually processing the download request.
FIG. 2 depicts a system for downloading a document in segments, or data blocks, for client processes. Similar to the FIG. 1 approach, a producer/consumer thread 205 (“Thread 1”) is created when browser client 1 115 makes a request for a document contained at the web server 105. In addition, a consumer thread (“Thread 2”) 230 is created for browser client 2 125, which requests the same information that browser client 1 requested. However, in this instance, Thread 1 205 downloads and stores the data retrieved from the web server as smaller data blocks 218 in the cache entry 215 instead of as one complete document. As Thread 1 retrieves each smaller data block, the block is placed in a data block 218 in the cache entry 215 and sent to browser client 1 115 via the client 1 output stream. In addition, when a data block is placed in the cache entry 215, Thread 2 retrieves the data block from the cache entry 215 and sends it to its client 125 via the client 2 output stream.
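A rough sketch of this block-oriented scheme is given below. The BlockCacheEntry class and its methods are invented for illustration; the key point is that the producer stores each block in the cache and forwards it to client 1 before requesting the next block, while the consumer thread drains blocks from the cache for client 2.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the FIG. 2 approach: the document is cached and
// served as data blocks rather than as one complete document.
public class BlockCacheEntry {

    private final List<byte[]> blocks = new ArrayList<>();
    private boolean complete = false;

    // Producer side: called for each block retrieved from the web server.
    public synchronized void append(byte[] block) {
        blocks.add(block);
        notifyAll();
    }

    public synchronized void markComplete() {
        complete = true;
        notifyAll();
    }

    // Consumer side: returns block i, waiting if the producer has not stored it
    // yet, or null once the document is complete and no more blocks will arrive.
    public synchronized byte[] awaitBlock(int i) throws InterruptedException {
        while (i >= blocks.size() && !complete) {
            wait();
        }
        return i < blocks.size() ? blocks.get(i) : null;
    }

    // Thread 1: download block by block, caching and serving client 1 as it goes.
    public static void produceAndServe(String url, BlockCacheEntry entry,
                                       OutputStream client1Out) throws Exception {
        try (InputStream serverIn = new URL(url).openStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = serverIn.read(buf)) != -1) {
                byte[] block = Arrays.copyOf(buf, n);
                entry.append(block);       // store the block in the cache entry
                client1Out.write(block);   // ...and send it to client 1 right away
                client1Out.flush();
            }
        }
        entry.markComplete();
    }

    // Thread 2: consumer only, pulls blocks out of the cache entry for client 2.
    public static void consume(BlockCacheEntry entry, OutputStream client2Out) throws Exception {
        byte[] block;
        for (int i = 0; (block = entry.awaitBlock(i)) != null; i++) {
            client2Out.write(block);
            client2Out.flush();
        }
    }
}
```

Note that in this sketch the producer's next read from the web server happens only after the write to client 1's output stream returns, which is what couples the download pace to client 1, as described below.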
This approach solves the problem, characteristic of the previous approach, of the large initial delay that a browser client suffers before receiving any of the data from the web server. In this case, as soon as a block of data is received from the web server, it is placed in the output stream, so the browser client receives an indication, i.e., data from the web server 105, that the request is being processed, without having to wait for the entire document to be downloaded.
However, the problem with this approach is that the browser client that makes the initial request dictates the speed at which the cache entry 215 is filled. In FIG. 2, Thread 1 205 controls the speed at which information is retrieved from the web server 105. Thread 1 205 will not request another data block from the web server until browser client 1 115 (associated with Thread 1) has received the previously retrieved data block. Consequently, if browser client 2 125 can retrieve a data block from the cache faster than browser client 1 115 can retrieve the data block sent by Thread 1 205, browser client 2 125 will incur idle time waiting for browser client 1 115 to finish receiving the previous data block. Again, this is because Thread 1 205 controls the downloading of information from the web server 105, and Thread 2 230 acts only as a consumer, sending the information to its client.
To address this problem of having one consumer thread and consequently one browser client dictate the speed of the download for other browsers, another approach was developed which creates multiple threads to control the downloading from the web server to the cache and another thread to control the downloading from the cache to the browser client.
FIG. 3 depicts a system that uses multiple threads, both as producers and as consumers. This approach isolates the activity of a producer thread, which receives the information from the web server, from that of a consumer, which delivers the information from the cache to the browser client. In this solution, an independent producer thread 310 is created that is responsible for filling the cache entry 215 with data blocks of information.
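A sketch of this separation is given below, with an independent producer thread that fills the cache entry at the web server's pace and a consumer thread per client that drains the cache at that client's own pace. All names are illustrative assumptions, not the patent's actual design.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the FIG. 3 separation: the producer never writes to
// any client, so no consumer's speed limits the download into the cache.
public class DecoupledDownload {

    private final List<byte[]> blocks = new ArrayList<>();
    private boolean complete = false;

    public synchronized void append(byte[] block) { blocks.add(block); notifyAll(); }
    public synchronized void markComplete()       { complete = true;   notifyAll(); }

    public synchronized byte[] awaitBlock(int i) throws InterruptedException {
        while (i >= blocks.size() && !complete) wait();
        return i < blocks.size() ? blocks.get(i) : null;
    }

    // Independent producer: talks only to the web server and the cache entry.
    public Thread startProducer(String url) {
        Thread producer = new Thread(() -> {
            try (InputStream serverIn = new URL(url).openStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = serverIn.read(buf)) != -1) {
                    append(Arrays.copyOf(buf, n));
                }
            } catch (Exception e) {
                // A real implementation would record the failure in the cache entry.
            } finally {
                markComplete();
            }
        });
        producer.start();
        return producer;
    }

    // One consumer thread per client: each drains the cache at its own client's pace.
    public Thread startConsumer(OutputStream clientOut) {
        Thread consumer = new Thread(() -> {
            try {
                byte[] block;
                for (int i = 0; (block = awaitBlock(i)) != null; i++) {
                    clientOut.write(block);
                    clientOut.flush();
                }
            } catch (Exception e) {
                // Error handling elided for the sketch.
            }
        });
        consumer.start();
        return consumer;
    }
}
```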
