Cache control method and cache apparatus
United States Patent 6,834,329 (Reexamination Certificate, active)
Filed: 2002-07-09
Issued: 2004-12-21
Examiner: Kim, Matthew (Department: 2186)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
U.S. Classes: 711/136; 711/158
FIELD OF THE INVENTION
The present invention relates to a cache control technique, and more particularly to a technique for selecting data to purge so as to improve cache hit rates.
BACKGROUND OF THE INVENTION
Cache techniques have come into increasingly wide use as memory hierarchies have deepened. Moreover, with the recent growth in information processing power, cache misses have become one of the main causes of performance degradation. Cache techniques therefore have a great impact on the performance of network systems.
An outstanding example is a proxy cache apparatus, which is placed in a network such as the Internet and caches data transferred via the network. A cache hit at the proxy cache apparatus shortens the data transfer route and accordingly improves data transfer speed. The proxy cache apparatus thus reduces the response time for transferring data.
FIG. 1 is a block diagram showing an example of a network system employing the proxy cache apparatus. As can be seen in FIG. 1, a proxy cache apparatus 4 acts as an intermediary between one or more servers 1-1 to 1-m and one or more clients 2-1 to 2-n via a network (Internet) 3. The proxy cache apparatus 4 receives a request from a client 2-j (1≦j≦n) by proxy for a server 1-i (1≦i≦m), and sends the request to the server 1-i on behalf of the client 2-j. Having received data from the server 1-i, the proxy cache apparatus 4 transfers the data to the client 2-j. On this occasion, the proxy cache apparatus 4 caches the data. Consequently, when the proxy cache apparatus 4 next receives a request for the same data from a client 2-k (1≦k≦n), the data stored in the proxy cache apparatus 4 is sent to the client 2-k.
As for caching policies employed by cache apparatuses like the proxy cache apparatus 4, a number of caching policies, including LRU, have been proposed. One of those policies is described in detail in the technical report: Ludmila Cherkasova, “Improving WWW Proxies Performance with Greedy-Dual-Size-Frequency Caching Policy,” Computer Systems Laboratory HPL-98-69 (R.1), November 1998 (hereinafter referred to as reference 1).
Although the theoretically optimal caching policy is to give the lowest priority to the data item that will be accessed in the most distant future, such a policy cannot be implemented unless all future data accesses are known in advance. Caching algorithms such as LRU (Least Recently Used) are therefore approximations of the theoretical optimum.
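By way of illustration, the following is a minimal sketch of the LRU policy in Python; the class name and interface are illustrative assumptions, not taken from any of the cited references.

    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU cache: purge the least recently used item when full."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = OrderedDict()  # keys ordered from least to most recently used

        def get(self, key):
            if key not in self.data:
                return None                  # cache miss
            self.data.move_to_end(key)       # mark as most recently used
            return self.data[key]

        def put(self, key, value):
            if key in self.data:
                self.data.move_to_end(key)
            elif len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict the least recently used item
            self.data[key] = value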
Another caching policy is proposed in the paper: E. J. O'Neil, P. E. O'Neil, and G. Weikum, “The LRU-K Page Replacement Algorithm for Database Disk Buffering,” in Proceedings of the 1993 ACM SIGMOD International Conference on Management of Data, pp. 297-306, 1993 (hereinafter referred to as reference 2). In the LRU-K method, the times of the last K references (K: a positive integer) to each data item are tracked, and the data item whose Kth most recent reference was made least recently is purged. In the case where K=2, namely LRU-2, data items are given different priorities depending on whether they have been referenced two or more times. Among the data items that have been referenced two or more times, the one whose second-to-last reference was made least recently is given the lowest priority. Naturally, a data item that has been referenced only once is given lower priority than data items that have been referenced two or more times. Whereas LRU uses only the time of the last access as information, the LRU-K algorithm uses the access times of the last K references, and LRU-K is thus a caching algorithm based on more information than LRU.
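The eviction rule can be sketched as follows; the helper function below is a hypothetical illustration, not code from reference 2. Items with fewer than K recorded references are treated as having an infinitely old Kth reference, so they are purged first, matching the priority ordering described above.

    def choose_lru_k_victim(history, K=2):
        """Pick the item to purge under the LRU-K rule.

        `history` maps each cached item to a list of its reference times,
        most recent last, truncated to the last K references.
        """
        def kth_most_recent(times):
            if len(times) < K:
                return float("-inf")   # backward K-distance is infinite
            return times[-K]
        return min(history, key=lambda item: kth_most_recent(history[item]))

    # Example: with K=2, item "b" (second-to-last reference at time 2) is
    # purged before "a" (second-to-last reference at time 4).
    victim = choose_lru_k_victim({"a": [4, 7], "b": [2, 8]}, K=2)  # -> "b"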
In another caching policy called the LRFU (Least Recently/Frequently Used) policy, which is described in the paper: D. Lee, J. Choi, J. H. Kim, S. H. Noh, S. L. Min, Y. Cho, and C. S. Kim, “On the Existence of a Spectrum of Policies that Subsumes the Least Recently Used (LRU) and Least Frequently Used (LFU) Policies,” in Proceedings of the 1999 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pp. 134-143, 1999 (hereinafter referred to as reference 3), data items are ordered by priority based on their CRF (Combined Recency and Frequency) values. The CRF value C(t) at time t is determined by a weighting function F(x). Assuming, for example, that the current time is 8 and that a data item was accessed at times 1, 2, 5 and 8, its CRF value C(t) is calculated as follows:

C(t) = F(8-1) + F(8-2) + F(8-5) + F(8-8) = F(7) + F(6) + F(3) + F(0).
When a data item has been referenced many times, computing its priority becomes a heavy operation and the volume of information to be stored increases. However, if the weighting function F(x) has the property F(x+y) = F(x)F(y), then C(t) can be derived as follows:
C(t) = F(8-1) + F(8-2) + F(8-5) + F(8-8)
= F(3+(5-1)) + F(3+(5-2)) + F(3+(5-5)) + F(0)
= F(0) + F(3)(F(5-1) + F(5-2) + F(5-5))
= F(0) + F(3)C(5).
Thus, the CRF value can easily be computed from the time of the previous reference and the CRF value at that time. Reference 3 indicates that the LRFU policy achieves higher hit rates than the LRU-2 policy.
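The incremental computation can be sketched as follows. The weighting function F(x) = (1/2)^(λx) used here satisfies F(x+y) = F(x)F(y); the class name and the value of λ are illustrative assumptions.

    class CRFTracker:
        """Per-item CRF bookkeeping for LRFU (after reference 3).

        Only the time of the last reference and the CRF value at that time
        are stored; F(x) = (1/2)**(lam * x) satisfies F(x+y) = F(x)*F(y),
        which makes the one-step update below valid.
        """
        def __init__(self, lam=0.1):
            self.lam = lam
            self.last_time = None   # time of the most recent reference
            self.crf = 0.0          # CRF value as of last_time

        def F(self, x):
            return 0.5 ** (self.lam * x)

        def reference(self, t):
            """Record a reference at time t and return the updated CRF."""
            if self.last_time is None:
                self.crf = self.F(0)                 # first reference: C(t) = F(0)
            else:
                # C(t) = F(0) + F(t - t_last) * C(t_last)
                self.crf = self.F(0) + self.F(t - self.last_time) * self.crf
            self.last_time = t
            return self.crf

At eviction time, the data item with the smallest CRF value is given the lowest priority and purged.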
Yet another caching policy, called Early Eviction LRU (EELRU), has been proposed in the paper: Y. Smaragdakis, S. Kaplan, and P. Wilson, “EELRU: Simple and Effective Adaptive Page Replacement,” in Proceedings of the 1999 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pp. 122-133, 1999 (hereinafter referred to as reference 4). Like LRU, this caching algorithm uses the time of the last reference as its information. EELRU performs LRU replacement, purging the least recently accessed data item, unless many recently accessed data items have just been purged. If many recently accessed data items have been purged, the e-th most recently accessed data item is purged from the cache instead. Additionally, e is dynamically adjusted so as to improve hit rates.
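For illustration only, the following highly simplified sketch captures the early-eviction idea. The fixed eviction position e, the refault counter, and the class interface are assumptions made here; reference 4 instead chooses e and the decision to evict early adaptively from observed recency distributions, which is omitted.

    from collections import OrderedDict

    class SimplifiedEELRU:
        """Toy illustration of EELRU-style early eviction."""
        def __init__(self, capacity, e=3, threshold=2):
            self.capacity = capacity
            self.e = e                   # early-eviction position (adaptive in reference 4)
            self.threshold = threshold   # how many "bad" purges trigger early eviction
            self.cache = OrderedDict()   # key -> value, most recently used last
            self.purged = OrderedDict()  # bounded history of recently purged keys
            self.refaults = 0            # re-requests for recently purged items

        def access(self, key, value):
            if key in self.cache:        # hit
                self.cache.move_to_end(key)
                return
            if key in self.purged:       # a recently purged item came back: LRU misbehaved
                del self.purged[key]
                self.refaults += 1
            if len(self.cache) >= self.capacity:
                if self.refaults >= self.threshold and len(self.cache) >= self.e:
                    victim = list(self.cache)[-self.e]   # purge e-th most recently used
                    self.refaults = 0
                else:
                    victim = next(iter(self.cache))      # plain LRU eviction
                del self.cache[victim]
                self.purged[victim] = True
                while len(self.purged) > self.capacity:
                    self.purged.popitem(last=False)
            self.cache[key] = value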
LRU-K, LRFU, and EELRU each achieve higher hit rates than well-known caching algorithms such as LRU; however, they do not work well for proxy caches. Under a caching policy applied to a proxy cache, caching and purging are in many cases performed on each data item requested by the clients. Consequently, when a client requests a large data item and that item is cached, a number of small data items may be evicted to make space available for storing it. In other words, the usefulness of a data item depends on its size in addition to access patterns. The GDSF policy (Greedy-Dual-Size-Frequency Caching Policy) described in reference 1 is a caching policy that takes the data size into account. GDSF gives reduced priority to large data items and thereby raises hit rates. However, when the data of a multimedia object, which is large and read sequentially, is stored in the cache, that data (continuous media data) is given low priority. The GDSF policy is therefore not adequate for dealing with continuous media data.
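The priority function described in reference 1, priority(p) = clock + freq(p) × cost(p) / size(p) with the clock inflated to the priority of each evicted item, can be sketched as follows. Fixing the cost term at 1 and the class interface are assumptions made here for illustration.

    import heapq

    class GDSFCache:
        """Sketch of the Greedy-Dual-Size-Frequency policy (after reference 1).

        priority(p) = clock + freq(p) * cost(p) / size(p); cost is fixed at 1
        here (uniform miss penalty), an assumption for this sketch.
        """
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.clock = 0.0     # inflation value; raised on each eviction
            self.freq = {}       # key -> reference count
            self.size = {}       # key -> size in bytes
            self.prio = {}       # key -> current priority
            self.heap = []       # (priority, key); may contain stale entries

        def _priority(self, key):
            return self.clock + self.freq[key] * 1.0 / self.size[key]

        def access(self, key, size_bytes):
            if key in self.size:        # hit: raise frequency, hence priority
                self.freq[key] += 1
            else:                       # miss: make room, then admit the item
                while self.used + size_bytes > self.capacity and self.heap:
                    p, victim = heapq.heappop(self.heap)
                    if self.prio.get(victim) != p:
                        continue        # stale heap entry; skip it
                    self.clock = p      # inflate clock to the evicted priority
                    self.used -= self.size[victim]
                    del self.size[victim], self.freq[victim], self.prio[victim]
                self.freq[key] = 1
                self.size[key] = size_bytes
                self.used += size_bytes
            self.prio[key] = self._priority(key)
            heapq.heappush(self.heap, (self.prio[key], key))

Note how a large size directly lowers an item's priority, which is exactly why large, sequentially read continuous media data ends up with low priority under GDSF.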
A cache memory section is often composed of a plurality of storage media that differ in processing speed and capacity. A typical cache memory section includes a high-speed, low-capacity primary memory and a low-speed, high-capacity secondary memory. When a proxy cache apparatus provided with such a memory section deals with large data such as continuous media data, the data is almost always stored in the secondary memory because its size exceeds the capacity of the primary memory. This means that if traffic concentrates on the continuous media data, the speed at which the data is read out of the low-speed secondary memory creates a bottleneck in the processing. Moreover, since the very large continuous media data is exclusively transferred from the low-speed secondary memory, other data cannot be transferred.
Inventors: Sasaki Shigero; Tanaka Atsuhiro; Tatsukawa Kosuke
Assignee: NEC Corporation
Attorney, Agent or Firm: Thomas Shane M.; William, Curtis & Christofferson P.C.