Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
1998-08-19
2001-08-28
Yoo, Do Hyun (Department: 2185)
Electrical computers and digital processing systems: memory
Storage accessing and control
Hierarchical memories
C711S134000, C711S136000, C711S135000, C709S217000, C709S203000
Reexamination Certificate
active
06282616
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to computer networks and, more particularly, to a mechanism for managing cache contents, a task conventionally performed by the Least Recently Used (LRU) algorithm.
As the amount of data transferred over networks has grown, that amount has conventionally been reduced by storing copies of transferred data (hereinafter referred to as caches) at various points in the network, so that the second and subsequent retrievals of the same data can be served from those caches. This cache content management has conventionally used the LRU algorithm.
When cache contents are managed by the LRU algorithm, data that no longer fits in a high-speed, small-capacity storage is moved to a low-speed, large-capacity storage such as a disk. When data no longer fits in the low-speed storage either, the data that has gone unused the longest is discarded. That is, the most recently accessed data is kept in the high-speed storage, older data is moved to the low-speed storage, and the data unused for the longest period is discarded.
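The two-tier LRU behavior described above can be sketched as follows. This is an illustrative model only, not the patent's implementation; the class name, tier sizes, and use of an in-memory dictionary for the "disk" tier are assumptions made for the sketch.

```python
from collections import OrderedDict

class TwoTierLRU:
    """Illustrative two-tier LRU cache: a small fast tier (memory) backed by
    a larger slow tier (disk). Both tiers evict in LRU order; data evicted
    from the slow tier is discarded entirely."""

    def __init__(self, fast_size, slow_size):
        self.fast = OrderedDict()  # most recently used at the end
        self.slow = OrderedDict()
        self.fast_size = fast_size
        self.slow_size = slow_size

    def get(self, key):
        if key in self.fast:
            self.fast.move_to_end(key)          # refresh recency
            return self.fast[key]
        if key in self.slow:
            value = self.slow.pop(key)          # promote back to the fast tier
            self.put(key, value)
            return value
        return None                             # cache miss

    def put(self, key, value):
        self.fast[key] = value
        self.fast.move_to_end(key)
        if len(self.fast) > self.fast_size:
            old_key, old_value = self.fast.popitem(last=False)
            self.slow[old_key] = old_value      # demote LRU item to slow tier
            if len(self.slow) > self.slow_size:
                self.slow.popitem(last=False)   # discard longest-unused data
```

Note that every eviction from the fast tier writes to the slow tier; as the summary below explains, it is exactly these unconditional demotions that make the disk the bottleneck.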
SUMMARY OF THE INVENTION
Cache content management based on the above-mentioned LRU algorithm is known to keep data accessed a second and subsequent time in storage reasonably well. However, when the LRU management method is used to control caching of network data transfers based on the World Wide Web (WWW), a large-capacity disk apparatus becomes necessary, and the speed of the disk apparatus becomes a bottleneck for the system. That is, with the LRU algorithm alone, many operations are required to transfer data from the memory serving as the high-speed storage to the disk serving as the low-speed storage, so the speed of the disk apparatus limits system performance.
An object of the present invention is to provide a cache technology that removes the bottleneck of disk speed and can transfer large volumes of network data at high speed.
The above-mentioned object may be attained by gathering statistics on data transfers over the network and, when transferred data is to be stored in the low-speed storage such as a disk, storing only data with a high access frequency and discarding data with a low access frequency. That is, statistically, data on the network can be classified into data accessed at a high frequency and data accessed at a low frequency. Therefore, while data is held in the high-speed storage under LRU management, the frequency at which it is accessed is measured. When that data is due to be transferred to the low-speed storage under LRU management, data with a high access frequency is transferred to the low-speed storage as in the conventional scheme, but data with a low access frequency is discarded instead of being transferred. Specifically, the access frequency observed while data is first held in the high-speed storage determines whether or not the data should be transferred to the low-speed storage under LRU management.
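The frequency-gated transfer policy described above can be sketched as a small variant of a two-tier LRU cache. This is a hypothetical illustration, not the patented implementation; the class name, the fixed access-count threshold, and the tier sizes are assumptions chosen for the sketch.

```python
from collections import OrderedDict

class FrequencyGatedCache:
    """Sketch of the access-frequency gate described above: on eviction from
    the fast (memory) tier, only data accessed at least `threshold` times is
    written to the slow (disk) tier; low-frequency data is discarded, saving
    disk writes."""

    def __init__(self, fast_size, slow_size, threshold=2):
        self.fast = OrderedDict()   # key -> (value, access_count)
        self.slow = OrderedDict()
        self.fast_size = fast_size
        self.slow_size = slow_size
        self.threshold = threshold

    def get(self, key):
        if key in self.fast:
            value, count = self.fast[key]
            self.fast[key] = (value, count + 1)  # measure access frequency
            self.fast.move_to_end(key)
            return value
        if key in self.slow:
            value = self.slow.pop(key)
            self.put(key, value)                 # promote; count restarts at 1
            return value
        return None

    def put(self, key, value):
        self.fast[key] = (value, 1)
        self.fast.move_to_end(key)
        if len(self.fast) > self.fast_size:
            old_key, (old_value, count) = self.fast.popitem(last=False)
            if count >= self.threshold:
                self.slow[old_key] = old_value   # frequent data: keep on disk
                if len(self.slow) > self.slow_size:
                    self.slow.popitem(last=False)
            # else: infrequent data is discarded, avoiding a disk write
```

Compared with plain LRU, only items whose measured access count clears the threshold ever reach the slow tier, which is the mechanism by which the disk-write bottleneck is relieved.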
REFERENCES:
patent: 5325505 (1994-06-01), Hoffecker
patent: 5829023 (1998-10-01), Bishop
patent: 5884298 (1999-03-01), Smith, II et al.
patent: 5893139 (1999-04-01), Kamiyama
patent: 5933853 (1999-08-01), Takagi
patent: 5961602 (1999-10-01), Thompson
patent: 5974509 (1999-10-01), Berliner
patent: 6012126 (2000-01-01), Aggarwal et al.
patent: 6085234 (1998-07-01), Pitts et al.
Hosokawa Takafumi
Mori Yasuhide
Nishikawa Norifumi
Tsuji Hiroshi
Yoshida Ken-ichi
Antonelli Terry Stout & Kraus LLP
Hitachi , Ltd.
McLean Kimberly
Yoo Do Hyun