Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
1999-03-22
2001-08-07
Nguyen, Hiep T. (Department: 2187)
C711S129000, C711S134000, C711S136000, C709S203000, C709S219000
Reexamination Certificate
active
06272598
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention pertains to data access network systems (e.g., Internet/Intranet systems). More particularly, this invention relates to partitioning a cache such that different cache replacement policies are employed for different storage areas of the cache that store different kinds of objects to improve performance of the cache.
2. Description of the Related Art
As is known, access to Web documents via the Internet is generally structured around the Hyper Text Transfer Protocol (HTTP). The HTTP protocol is a request-and-response protocol. When a user at a client device (e.g., a personal computer) designates a particular Web page, at least one request is generated. The number of requests is dependent upon the sophistication of the designated Web page. Often, a Web page is formed from a number of data files, such as text files, graphics files, audio files, and video files. Each of the files is referred to as an “object”. A multi-object page is aesthetically pleasing, but each object requires a separate request and a separate response. Therefore, the time for each request-and-response round trip plays a role in determining the total time a user must wait to view the complete Web page.
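The request-per-object pattern can be illustrated with a short sketch (Python, with hypothetical URLs, not part of the original disclosure): each object of a multi-object page costs its own HTTP round trip, and those round-trip times add up to the user's total wait.

```python
import time
import urllib.request

# Hypothetical objects making up one Web page: an HTML file, a stylesheet,
# and two images. Each costs a separate request-and-response round trip.
page_objects = [
    "http://www.example.com/index.html",
    "http://www.example.com/style.css",
    "http://www.example.com/logo.gif",
    "http://www.example.com/photo.jpg",
]

total = 0.0
for url in page_objects:
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:  # one request ...
        body = response.read()                     # ... one response
    elapsed = time.monotonic() - start
    total += elapsed
    print(f"{url}: {len(body)} bytes in {elapsed:.3f} s")

print(f"Total wait to assemble the page: {total:.3f} s")
```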
The total latency in downloading a Web page or other Internet document (e.g., an FTP file) depends on a number of factors, including the transmission speeds of communication links between a client device and the server on which the requested file is stored (i.e., the content server), delays incurred at the server in accessing the document, and delays incurred at any intermediate device located between the client device and the content server. The intermediate devices may include proxy servers and routers situated between the client devices and the content server that stores the Web page file. An important benefit of a Web proxy server is the ability to cache objects received from the remote content servers. This allows the cached objects to be quickly retrieved and sent to the client devices if they are again requested. As is known, some of the cached objects may be requested by the same or a different client device at later times. The caching operation of the Web proxy server will be described below, with reference to FIG. 1.
As can be seen from FIG. 1, when a client device or user terminal 12 generates a request for a particular object (e.g., the object 10 stored in the remote server 18), the cache of the proxy server 16 in the local server 14 is searched to determine whether the object 10 is stored at the proxy server 16. If the object is not found in the cache of the proxy server 16, a “cache miss” results and the local server 14 directs the request to the remote server 18 via the Internet 20. As can be seen from FIG. 1, the remote server 18 stores the requested object 10. Once the remote server 18 receives the request, it directs a response with the requested object 10 to the client device 12 via the local server 14. During this process, the requested object 10 is also cached in the proxy server 16 of the local server 14. This eliminates the need for the local server 14 to send another request to the remote server 18 for the same object 10 at a later time when either the same client device 12 or a different client device (not shown) requests the same object 10. When the object 10 is again requested, the proxy server 16 is accessed and a “cache hit” results. In this case, the cached object 10 is quickly forwarded to the client device directly from the proxy server 16. This eliminates delays encountered in communicating between the proxy server 16 and the remote server 18. By storing copies of objects received from remote sites, the proxy server 16 reduces the number of requests that are directed to the remote server 18, as well as the traffic on the Internet 20 that results from transmitting the responses in the form of a number of packets that must be reassembled at the client device 12.
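The hit/miss flow described above can be sketched as follows (Python; the `fetch_from_origin` helper is a hypothetical stand-in for the round trip from the local server 14 to the remote server 18):

```python
import urllib.request

proxy_cache = {}  # url -> object body; stands in for the proxy server's cache


def fetch_from_origin(url):
    """Hypothetical round trip from the local server to the remote content server."""
    with urllib.request.urlopen(url) as response:
        return response.read()


def handle_request(url):
    if url in proxy_cache:
        # "Cache hit": serve the object directly from the proxy,
        # avoiding any traffic to the remote server.
        return proxy_cache[url]
    # "Cache miss": forward the request to the remote server over the
    # Internet, cache the returned object, then forward it to the client.
    body = fetch_from_origin(url)
    proxy_cache[url] = body
    return body
```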
Ideally, the cache at the proxy server 16 retains all of the cached objects. However, the typical storage capacity of the proxy server 16 is in the range of 256 megabytes to 1 terabyte, with most Web proxy capacity being at the lower half of the range. Therefore, it is important to form a replacement strategy or policy for determining which objects are to be evicted or replaced from the cache of the proxy server 16 when a recently received object is to be cached within exhausted storage space. Two important metrics that are used to measure proxy cache performance are cache hit rate and byte hit rate. The cache hit rate is the percentage of all user requests that are satisfied by the proxy server 16, rather than by access to the original server 18. The byte hit rate is the percentage of all network traffic, measured in bytes, transferred directly from the proxy server 16, instead of across the external network.
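Both metrics can be computed from a trace of the requests handled by the proxy; a minimal sketch, assuming a hypothetical `(was_hit, size_in_bytes)` log format:

```python
def proxy_metrics(trace):
    """trace: iterable of (was_hit, size_in_bytes) tuples, one per client request."""
    requests = hits = 0
    bytes_total = bytes_from_cache = 0
    for was_hit, size in trace:
        requests += 1
        bytes_total += size
        if was_hit:
            hits += 1
            bytes_from_cache += size
    cache_hit_rate = hits / requests if requests else 0.0
    byte_hit_rate = bytes_from_cache / bytes_total if bytes_total else 0.0
    return cache_hit_rate, byte_hit_rate


# Example: three small hits and one large miss give a high cache hit rate
# (75%) but a much lower byte hit rate (4%).
print(proxy_metrics([(True, 1_000), (True, 2_000), (True, 1_000), (False, 96_000)]))
```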
There are a number of cache replacement strategies or policies that have been proposed for Web proxy caching. However, the cache in a prior art proxy server typically uses only one replacement policy to manage the content of the cache. As is known, each replacement policy has its advantages and disadvantages, and is suited to one kind of object. In other words, each replacement policy improves the performance of the cache in the proxy server for one chosen metric (e.g., hit rate, byte hit rate, or latency). For example, a replacement policy of evicting the largest document (rather than a number of smaller documents) may improve the cache hit rate, but does not improve the byte hit rate. As a further example, a replacement policy of evicting the smallest document may increase the byte hit rate, but does not improve the cache hit rate.
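The trade-off between the two size-based policies mentioned above can be sketched as follows (an illustrative sketch only, not the claimed invention); both variants free space for an incoming object, but they favor different metrics:

```python
def evict_largest(cache, needed_bytes):
    """Evict the largest objects first; keeping many small objects tends to
    help the cache hit rate, at the expense of the byte hit rate."""
    freed = 0
    while cache and freed < needed_bytes:
        victim = max(cache, key=lambda url: cache[url])  # cache: url -> size in bytes
        freed += cache.pop(victim)
    return freed


def evict_smallest(cache, needed_bytes):
    """Evict the smallest objects first; keeping large objects tends to
    help the byte hit rate, without improving the cache hit rate."""
    freed = 0
    while cache and freed < needed_bytes:
        victim = min(cache, key=lambda url: cache[url])
        freed += cache.pop(victim)
    return freed
```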
SUMMARY OF THE INVENTION
One feature of the present invention is to increase the performance of a cache system.
Another feature of the present invention is to increase the performance of a cache system by allowing multiple replacement policies to be applied to the cache system.
A further feature of the present invention is to increase the performance of a cache system by partitioning the cache system into a number of storage areas, each for storing one kind of object, such that multiple cache replacement policies, each optimized for one kind of object, can be applied to the cache system.
A cache system includes storage that is partitioned into a plurality of storage areas, each for storing one kind of object received from remote sites and to be directed to target devices. The cache system further includes a cache manager coupled to the storage to cause objects to be stored in the corresponding areas of the storage. The cache manager causes cached objects in each of the storage areas to be replaced in accordance with one of a plurality of replacement policies, each optimized for one kind of object.
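A minimal sketch of such a partitioned cache (the partition names, capacities, and per-kind policies below are hypothetical, chosen only to illustrate one replacement policy per kind of object):

```python
from collections import OrderedDict


class Partition:
    """One storage area of the cache, with its own capacity and replacement policy."""

    def __init__(self, capacity_bytes, choose_victim):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = OrderedDict()        # url -> size, kept in recency order
        self.choose_victim = choose_victim  # replacement policy for this area

    def lookup(self, url):
        hit = url in self.objects
        if hit:
            self.objects.move_to_end(url)   # record recency for LRU-style policies
        return hit

    def store(self, url, size):
        while self.objects and self.used + size > self.capacity:
            victim = self.choose_victim(self.objects)
            self.used -= self.objects.pop(victim)
        self.objects[url] = size
        self.used += size


# Hypothetical per-kind policies.
lru = lambda objs: next(iter(objs))                   # least recently used first
largest_first = lambda objs: max(objs, key=objs.get)  # largest object first


class PartitionedCache:
    """Cache manager that routes each object to the storage area for its kind."""

    def __init__(self):
        self.partitions = {
            "text":  Partition(10_000_000, choose_victim=lru),
            "image": Partition(50_000_000, choose_victim=largest_first),
        }

    def store(self, url, kind, size):
        self.partitions[kind].store(url, size)
```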
A cache system includes storage that either includes a first and a second storage device coupled together, or is partitioned into the first and second storage devices. Each of the storage devices stores objects received from the remote sites and to be directed to the target devices. The cache system also includes a cache manager coupled to the first storage device to send received objects to the first storage device. The cache manager causes cached objects in the first storage device to be replaced in accordance with a first replacement policy. The replaced objects are then sent to the second storage device. The cache manager causes objects stored in the second storage device to be replaced in accordance with a second replacement policy different from the first replacement policy.
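A minimal sketch of the two-device arrangement, again with hypothetical policies (LRU in the first storage device, evict-largest in the second): objects evicted from the first device are demoted to the second rather than discarded.

```python
from collections import OrderedDict


class TwoLevelCache:
    def __init__(self, first_capacity, second_capacity):
        self.first = OrderedDict()   # url -> size; first storage device (LRU order)
        self.second = {}             # url -> size; second storage device
        self.first_capacity = first_capacity
        self.second_capacity = second_capacity
        self.first_used = 0
        self.second_used = 0

    def store(self, url, size):
        # Newly received objects always enter the first storage device.
        self.first[url] = size
        self.first_used += size
        while self.first_used > self.first_capacity:
            # First replacement policy: evict the least recently used object,
            # then demote it to the second storage device.
            victim, victim_size = self.first.popitem(last=False)
            self.first_used -= victim_size
            self._demote(victim, victim_size)

    def _demote(self, url, size):
        self.second[url] = size
        self.second_used += size
        while self.second_used > self.second_capacity:
            # Second, different replacement policy: evict the largest object.
            largest = max(self.second, key=self.second.get)
            self.second_used -= self.second.pop(largest)
```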
Other features and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
REFERENCES:
patent: 4458310 (1984-07-01), Chang
patent: 5623627 (1997-04-01), Witt
patent: 5787471 (1998
Arlitt Martin F.
Friedrich Richard J.
Jin Tai Y.
Hewlett-Packard Company
Li Thomas X.
Nguyen Hiep T.