Title: Method of and apparatus for rapid retrieval of data in a...
Class: Electrical computers and digital processing systems: multicomput – Computer-to-computer data modifying
US Classification: C707S793000
Type: Reexamination Certificate
Filed: 2000-07-21
Issued: 2004-11-30
Examiner: Maung, Zarni (Department: 2154)
Status: active
Patent Number: 06826626
ABSTRACT:
FIELD OF THE INVENTION
This invention relates to the field of data retrieval in a content distribution network and, in particular, to retrieving data through a geographically dispersed content distribution network such as the World Wide Web.
BACKGROUND OF THE INVENTION
Users of the World Wide Web obtain data on the web through HTTP transactions. Each HTTP transaction consists of a request originated by a user client and a response originated by a server containing the resource that includes the desired information. The request typically indicates the entity being sought and various parameters about what types of responses are acceptable, and, under some styles of requests, the request also includes client-supplied input data. A successful response contains the requested entity and some informational headers about that response.
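By way of illustration, the following is a minimal sketch of one such request/response exchange using Python's standard http.client module; the host and path are hypothetical and not part of the described system.

```python
# Minimal sketch of a single HTTP transaction (hypothetical host and path).
import http.client

conn = http.client.HTTPConnection("www.example.com")  # hypothetical origin server
# The request names the entity sought and what kinds of responses are acceptable.
conn.request("GET", "/quotes/acme", headers={
    "Accept": "text/html",
    "Accept-Encoding": "gzip, identity",
})
resp = conn.getresponse()
body = resp.read()  # a successful response carries the entity itself
print(resp.status, dict(resp.getheaders()))  # plus informational headers about it
conn.close()
```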
The entity (either the request entity or response entity, but typically only the response entity) may be encoded in a fashion that is compliant with the capabilities of the receiving end of the connection as typically indicated in the request. More than one encoding may be applied so long as the order of application is preserved. An entity with no encodings on it is said to have the identity encoding.
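As a rough illustration of entity encodings, the sketch below applies a single gzip encoding to an entity and then removes it; an entity left untouched carries the identity encoding. The entity body shown is a placeholder.

```python
# Sketch: applying and removing one entity encoding (gzip).
import gzip

entity = b"<html>...placeholder resource body...</html>"

# The sender applies encodings in some order and advertises that order
# (e.g., Content-Encoding: gzip); the receiver removes them in reverse order.
encoded = gzip.compress(entity)
decoded = gzip.decompress(encoded)
assert decoded == entity  # the original entity is recovered exactly
```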
The HTTP architecture defines three basic components: user agents, origin servers, and proxies. This architecture is shown in FIG. 1. The user agent 20 and the origin server 30 form the ends of the chain, which may contain any number of proxies 40 as the middle links. One of ordinary skill in the art will understand that the user agent 20 is Internet interface user software such as Microsoft Explorer or Netscape running on a suitable computer. The dots in FIG. 1 are intended to show that it is possible that there are several proxies in the chain extending from the user agent 20 to the origin server 30. Under some circumstances, an individual caching proxy may “stand in” for the origin server, thus shortening the chain and improving performance by increasing locality.
A complete HTTP session that encompasses all of the network elements in the transaction is known as an “end-to-end session.” An individual leg of that session (e.g., from user agent 20 to proxy 40, between two intermediate proxies 40, or between a proxy 40 and the origin server 30) is known as a “hop.” The terms HTTP end-to-end and HTTP hop-to-hop are similar in concept to the IP terms of the same names, but an end-to-end HTTP session may contain multiple end-to-end IP sessions, while an HTTP hop consists of exactly one IP end-to-end session and may include multiple IP hops.
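The relationship between the end-to-end session and its hops can be pictured, purely as an illustrative aside, with the following sketch; the element names are hypothetical labels keyed to FIG. 1.

```python
# Hypothetical sketch: the request chain as an ordered list of elements.
chain = ["user agent 20", "proxy 40a", "proxy 40b", "origin server 30"]

# Each adjacent pair of elements forms one HTTP hop (exactly one IP end-to-end
# session, possibly spanning several IP hops); the whole chain is the
# end-to-end HTTP session.
hops = list(zip(chain, chain[1:]))
for upstream, downstream in hops:
    print(f"hop: {upstream} -> {downstream}")
```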
In conventional usage, and as the term “client” is used herein, any element creating upstream traffic (the traffic carrying the request) is said to be a client. Similarly, in conventional usage, and as the term “server” is used herein, any element creating response-bearing traffic is said to be a server. Thus, in any chain which includes more than just the end elements there are multiple clients and multiple servers. For example, for a request originated by the user agent 20 and relayed to the first proxy 40, the user agent 20 acts as the client. The first proxy 40 then acts as a client for the next proxy 40, and so on. Some protocol features may be introduced by any client or server, while some may be introduced only by the user agent 20 and/or the origin server 30 at the chain's end points. Protocol features which may be introduced by any client or server are referred to as hop-to-hop features and apply only to a single link of the chain. Protocol operations that may be negotiated only at the end points of the chain are called end-to-end features. These may not be modified by any intermediate proxy, although they may be created by a proxy under some circumstances.
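To make the hop-to-hop/end-to-end distinction concrete, the following hypothetical proxy-side sketch strips the hop-by-hop headers enumerated in RFC 2616 before forwarding a request; the header values shown are illustrative only.

```python
# Hypothetical proxy sketch: hop-by-hop headers apply to a single link of the
# chain and are removed before forwarding; end-to-end headers pass through.
HOP_BY_HOP = {
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailers", "transfer-encoding", "upgrade",
}

def headers_to_forward(incoming_headers: dict) -> dict:
    """Return only the end-to-end headers from one hop's incoming headers."""
    return {name: value for name, value in incoming_headers.items()
            if name.lower() not in HOP_BY_HOP}

incoming = {"Host": "www.example.com", "Accept": "text/html",
            "Connection": "keep-alive", "TE": "deflate"}
print(headers_to_forward(incoming))  # only Host and Accept survive the hop
```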
Traditional proxy environments focus on using large cache networks to replicate popular data and bring it closer (in terms of network distance) to the user agent 20 that is requesting it. Each proxy 40 has an associated cache. There are circumstances where a proxy 40 may already have in its cache fresh, valid data which would constitute a valid response to the request. In such circumstances, known as a “cache hit”, the proxy can respond to the request without having to forward the request further along the chain to the origin server 30. One potential advantage of traditional proxy environments is reduced latency: because the data is closer to the client when served from a cache than when obtained from the origin server 30, the response time is improved. A second potential advantage is increased object availability. Because the data needs to traverse a smaller number of network links when served from cache, the probability of any link in the total transaction being down is reduced. A third potential advantage is reduced bandwidth requirements. If the response does not need to traverse all the links to the origin server 30, then some bandwidth can be saved.
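A caching proxy's decision on a request can be sketched roughly as follows; the cache structure and freshness test are illustrative assumptions rather than a description of any particular proxy.

```python
# Illustrative sketch of a caching proxy: answer from the local cache on a hit,
# otherwise forward the request one hop closer to the origin server.
import time

cache = {}  # request URL -> (expiry timestamp, response body)

def handle_request(url, forward_one_hop):
    entry = cache.get(url)
    if entry and entry[0] > time.time():        # cache hit: stored copy is still fresh
        return entry[1]                         # respond without using the rest of the chain
    body, max_age = forward_one_hop(url)        # cache miss: go further up the chain
    cache[url] = (time.time() + max_age, body)  # keep the fresh copy for later requests
    return body
```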
While traditional caching proxy strategies are widely deployed on the Internet today and can confer all of these potential benefits, they generally do not implement any efficiency-oriented transfer encodings. The traditional approaches concentrate on reducing byte count by having proxies that can reuse and/or revalidate fresh documents they have stored on disk. This limits their efficacy primarily to static content environments. Most images and a substantial number of hand-created markup documents fall into this category. These entities typically do not change between requests except at very long intervals.
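For such static documents, the byte-count saving typically comes from conditional revalidation rather than retransmission; the following sketch assumes a stored copy and its Last-Modified value are already on hand, and the flow shown is illustrative.

```python
# Sketch of revalidating a stored static document instead of re-downloading it.
import http.client

def revalidate(host, path, stored_body, stored_last_modified):
    conn = http.client.HTTPConnection(host)
    conn.request("GET", path, headers={"If-Modified-Since": stored_last_modified})
    resp = conn.getresponse()
    if resp.status == 304:      # Not Modified: the cached bytes are still valid
        resp.read()
        conn.close()
        return stored_body
    fresh = resp.read()         # the document changed: take the fresh copy
    conn.close()
    return fresh
```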
There is, however, another type of content frequently encountered on the Web: dynamic content. Documents created or altered by an origin server on demand in response to requests are considered dynamic documents. They change very often, if not on every request, and are served with information prohibiting traditional caches from using them in answer to subsequent requests. A stock quote server, for example, produces dynamic documents. This type of data is traditionally neglected by existing strategies, but it benefits the most from a strategy known as differential encoding or delta encoding.
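The information prohibiting traditional caches from reusing such a response is typically a set of cache-suppressing headers; the combination below is one common, illustrative example rather than a requirement.

```python
# Illustrative response headers an origin server might attach to a dynamic
# document (e.g., a stock quote) to keep traditional caches from reusing it.
dynamic_response_headers = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",   # honored by older HTTP/1.0 caches
    "Expires": "0",
}
```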
In traditional delta encoding, a fresh response to a request is compared against an older version of the response and only the difference is sent. This ideally avoids having to resend the entire resource in response to every request. While conventional delta encoding schemes can work well in many circumstances, their actual or proposed implementations do have potential drawbacks. One prominent potential drawback is an excessive proliferation of older versions of the resource in caches throughout a system, especially for resources that change often. Also, most delta encoding schemes are end-to-end schemes. They thus cannot achieve their full potential in circumstances where the most appropriate encodings available on any of the links are not understood by both end points. In these scenarios the capabilities that the transaction can exploit are limited to the best common capabilities of the end points. Another potential drawback with some implementations of a differential encoding scheme is that they require modification of one or both of the end points of the chain.
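In outline, a delta encoder compares the old and new instances of a resource and transmits only an edit script; the sketch below uses Python's difflib purely to illustrate the idea, with made-up stock-quote content.

```python
# Illustrative delta encoding: instead of resending the entire fresh entity,
# compare it against the older version and send only the difference.
import difflib

old_version = ["ACME 101.25\n", "GLOBEX 54.10\n", "INITECH 12.00\n"]
new_version = ["ACME 103.40\n", "GLOBEX 54.10\n", "INITECH 12.00\n"]

# The delta lists only the changed lines (plus a little context), so for a
# large, mostly unchanged document it is far smaller than the full entity.
delta = difflib.unified_diff(old_version, new_version, fromfile="old", tofile="new")
print("".join(delta))

# A receiver still holding old_version would apply this delta (e.g., with a
# patch tool) to reconstruct new_version without transferring unchanged lines.
```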
There thus remains a need for a system that can introduce efficiencies in the handling of dynamic data in a proxied environment. At the same time, it is useful for such a system to be transparent to the end points and to be able to take advantage of the encoding capabilities of the proxies rather than being limited to the encoding capabilities of the end points. In fact, it is useful to have a system in which no modifications to the end points (user agent and origin server) are necessary to achieve increased speed and efficiency.
SUMMARY OF THE INVENTION
These and other ends are achieved in the present invention through its provision of a method of and an apparatus for retrieving data in a content distribution network.
Inventor: Chang Jung-won
Assignee: Clear Blue Technologies Management, Inc.
Examiner: Maung Zarni
Attorney/Agent: Potomac Patent Group PLLC