Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
2000-05-30
2004-10-19
Anderson, Matthew D. (Department: 2186)
active
06807607
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates to cache memory management and, more particularly, to a simple technique for deciding whether or not to remove an object from cache storage.
BACKGROUND OF THE INVENTION
Cache memories are relatively small buffer memories used in computer systems to provide temporary storage for data retrieved from larger, slower main memory devices, such as hard disk drives. Cache memories, if properly managed, can significantly improve computer system performance.
Cache memories are employed in many computer workstations. A central processing unit in a workstation needs less time to obtain data from a cache memory than it does to obtain the same data by accessing the main memory. If a reasonable percentage of the data needed by a central processing unit is maintained in cache memory, the amount of processor time wasted waiting for data to be retrieved from main memory is significantly reduced, improving the computer system performance.
Cache memories are also employed in network environments, exemplified by the Internet or World Wide Web. In a Web environment, a user (interacting through a personal computer) communicates with a Web host through a proxy server. One of the functions that a proxy server performs is caching copies of data previously requested by the user. When the user submits a request for data, the proxy server intercepts that request and determines whether it has already cached a copy of the requested data. If a cached copy exists, the proxy server returns that copy to the user without ever forwarding the request to the Web host. If a cached copy does not exist in the proxy server, the server forwards the user's request to the Web host. When the Web host returns the requested data, the proxy server attempts to cache a copy of the data before passing it on to the user.
If the requested data unit, or object, is found in the cache, this is called a “hit”. If the object cannot be found in the cache, this is called a “miss”, necessitating a “fetch” operation to retrieve the object from the Web host.
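The hit/miss/fetch flow described above can be sketched as follows. This is an illustrative outline only, not the patent's method; the names `lookup` and `fetch_from_host` are assumptions standing in for the proxy's cache probe and the network round trip to the Web host.

```python
# Hypothetical sketch of a proxy-server cache lookup (names are illustrative).

def lookup(cache, url, fetch_from_host):
    """Return (data, was_hit) for a requested URL."""
    if url in cache:                  # "hit": serve the cached copy
        return cache[url], True
    data = fetch_from_host(url)       # "miss": forward the request to the Web host
    cache[url] = data                 # attempt to cache a copy before returning it
    return data, False
```

A second request for the same URL is then served from the cache without contacting the host.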
In steady-state system operation, the cache becomes heavily loaded. A cache miss requires a fetch operation and also implies that some of the data already in the cache must be removed to make room for the data being fetched from main memory. Cache replacement techniques have been studied extensively. Use-based replacement techniques take into account the record of use for every cached object when making replacement decisions. Examples of this type of replacement technique are the “Least Recently Used” (LRU) approach or the “working set” approach. Other approaches make removal/replacement decisions on the basis of factors unrelated to usage. A “first-in-first-out” or FIFO approach and a “random removal” approach are examples of cache replacement techniques which do not take into account object usage.
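Of the use-based techniques named above, LRU is the most common; a minimal sketch, assuming a fixed-capacity cache (the class and method names here are illustrative, not from the patent):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal illustration of the "Least Recently Used" replacement policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()    # insertion order doubles as recency order

    def get(self, key):
        if key not in self.store:
            return None               # a miss
        self.store.move_to_end(key)   # record the use: key is now most recent
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
```

Note that every `get` and `put` must update the recency record, which is the bookkeeping cost that non-use-based schemes such as FIFO and random removal avoid.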
Cache replacement techniques can place heavy demands on system resources. One known class of techniques requires that a number of cached objects be “ranked” relative to one another to determine which object would be the “best” one to remove. To make sure that cache replacement decisions can be made quickly enough, systems practicing “ranking” techniques are typically implemented at least in part in hardware. Hardware implementations are costlier and less flexible than software implementations.
A similar situation is found for cache systems used in proxy servers. Once its cache is full, the proxy server must remove cached objects, also called documents, in order to free up space for newly received documents. In a network environment, cache replacement operations are sometimes referred to as garbage collection.
A proxy server is typically capable of storing large numbers of documents. Because of this, it is not always practical to use “ranking” techniques in making cache replacement decisions. For this reason, garbage collection is sometimes performed at a certain time of day or is triggered only when the cache size exceeds a given limit.
Where large numbers of documents are stored, a proxy server may select a subset of the documents and assign a weight to each document in the subset. The weight may be based on an estimate of the probability that the particular document will be needed in the future. Once all the documents in the subset have been assigned weights, the documents are ranked relative to one another, and a predetermined percentage of them is removed. One problem with this approach is choosing the size of the subset. If the subset is too small, it may not be statistically representative of the entire set: removing a predetermined percentage of the documents in the subset may remove documents that ranked low within the subset but would have ranked relatively higher had the complete set of documents been ranked. Making the subset larger is not a complete solution, since larger subsets increase the computational demands on the system.
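The subset-ranking approach described above can be sketched as follows. This is a hedged illustration under stated assumptions: `weight_fn`, `subset_size`, and `removal_fraction` are hypothetical names for the weight estimator and tuning parameters, which the text does not specify.

```python
import random

def garbage_collect(documents, weight_fn, subset_size, removal_fraction):
    """Illustrative sketch: sample a subset, rank it by weight, evict the
    bottom fraction. `weight_fn` estimates the probability a document will
    be needed again (higher weight = more likely to be needed)."""
    subset = random.sample(documents, min(subset_size, len(documents)))
    ranked = sorted(subset, key=weight_fn)          # lowest weight first
    n_remove = int(len(ranked) * removal_fraction)
    return ranked[:n_remove]                        # documents to evict
```

The subset-size dilemma is visible directly in the parameters: a small `subset_size` keeps the `sorted` call cheap but makes the sample less representative of the full document set.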
SUMMARY OF THE INVENTION
The present invention is a cache replacement technique that minimizes demands on system resources. When a cache replacement decision is to be made, an object is selected from the cache and assigned a weight in accordance with a predetermined methodology. The assigned object weight is compared to an existing threshold. If the assigned object weight is lower than the threshold, the object is marked for removal and the threshold value is reduced. If the assigned object weight is higher than the threshold, the object remains in the cache and the threshold level is raised. A reduction in the threshold level decreases the chances that the next object considered will be selected for replacement. An increase in the threshold level increases the chances that the next object considered will be selected for replacement.
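The decision procedure summarized above can be sketched in a few lines. This is a reading of the summary, not the claimed implementation: the weight methodology (`weight_fn`), the threshold adjustment amount (`step`), and the mutable `state` dictionary are all assumptions introduced here for illustration.

```python
def consider(obj, weight_fn, state, step=1.0):
    """Sketch of the adaptive-threshold replacement decision.
    `state["threshold"]` persists across calls; `weight_fn` assigns the
    object's weight per some predetermined methodology (illustrative)."""
    weight = weight_fn(obj)
    if weight < state["threshold"]:
        state["threshold"] -= step   # lowering the threshold makes the next
        return "remove"              # object less likely to be removed
    state["threshold"] += step       # raising the threshold makes the next
    return "keep"                    # object more likely to be removed
```

The feedback on the threshold is what keeps the scheme cheap: each decision examines a single object against a single number, with no ranking of objects against one another.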
REFERENCES:
patent: 5546559 (1996-08-01), Kyushima et al.
patent: 5608890 (1997-03-01), Berger et al.
patent: 5787473 (1998-07-01), Vishlitzky et al.
patent: 5892937 (1999-04-01), Caccavale
patent: 6219760 (2001-04-01), McMinn
Anderson Matthew D.
Woods Gerald R.