Mechanism for managing an object cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories



Details

Patent type: Reexamination Certificate
Status: Active
Patent number: 06591346
U.S. classifications: C711S135000, C711S136000, C711S159000, C711S160000, C707S793000

ABSTRACT:

FIELD OF THE INVENTION
This invention relates generally to computer software, and more particularly to a mechanism for managing an object cache.
BACKGROUND OF THE INVENTION
The use of hardware caches to provide a fast storage buffer in the central processing unit of a computer to reduce the penalty of access to frequently requested resources is widespread and well understood. Likewise, in software, there is a need to retrieve information, in particular objects, in a timely and efficient manner. An object is a discrete item that can be selected and manipulated, often a self-contained item that contains data and the procedures necessary to manipulate that data. Object caches are used to store a set of frequently accessed objects so that they may be made available faster than if the objects were stored on disc or generated upon request through computation.
An object cache manager receives and services requests for objects from software applications. The object cache manager determines whether the requested object is stored within the object cache. If so, the object is retrieved from the object cache. If not, the requested object is created or retrieved from disc. In either case, the requested object is then delivered to the requesting party.
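In code, that request path might look like the following C sketch. The cache here is a toy fixed-size array, and every name (get_object, cache_lookup, load_object, cache_insert) is an illustrative assumption rather than anything taken from the patent:

    #include <stdlib.h>

    /* Toy stand-ins for the cache machinery described above;
     * all names are illustrative, not from the patent. */
    typedef struct {
        int key;
        /* ... object data and the procedures to manipulate it ... */
    } object_t;

    #define CACHE_SLOTS 4
    static object_t *slots[CACHE_SLOTS];       /* toy cache: a fixed array */

    static object_t *cache_lookup(int key)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (slots[i] && slots[i]->key == key)
                return slots[i];               /* cache "hit"  */
        return NULL;                           /* cache "miss" */
    }

    static object_t *load_object(int key)      /* slow path: disc/compute */
    {
        object_t *obj = malloc(sizeof *obj);
        obj->key = key;
        return obj;
    }

    static void cache_insert(object_t *obj)
    {
        for (int i = 0; i < CACHE_SLOTS; i++)
            if (!slots[i]) { slots[i] = obj; return; }
        /* A full cache would have to evict here; see the LRU
         * discussion that follows. */
    }

    /* Service a request: return the cached copy if present, otherwise
     * fetch (or create) the object, cache it, and deliver it. */
    object_t *get_object(int key)
    {
        object_t *obj = cache_lookup(key);
        if (!obj) {
            obj = load_object(key);
            cache_insert(obj);
        }
        return obj;
    }

The fixed array is only a stand-in so the get-or-create control flow is visible; a real object cache would key on richer identifiers and handle eviction, as discussed next.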
Object caches are typically implemented through the use of a doubly linked list. A linked list is a data structure in which each object contains a pointer to the next object, thus forming a linear list. A doubly linked list contains pointers to both the next and previous objects. As shown in FIG. 1, the first object in the list is called the "Head", and the last object in the list is called the "Tail." One can appreciate that the "Head" and "Tail" of a doubly linked list are relative terms, but for ease of discussion it will be assumed that the "Head" is the portion of the doubly linked list from which objects are removed, and the "Tail" is the portion to which objects are added.
An object cache can typically accommodate only a bounded number of objects. Objects are added to the doubly linked list until this limit is reached.
When a new object is to be inserted into an object cache that is already at maximum capacity, an object must be deleted from the object cache before the new object can be added. Typically, the aim is to remove the least needed object in the object cache to maximize the benefit of storing objects in the object cache. It is often assumed that the least needed object is the least recently used object. For this reason, the doubly linked list used to implement the object cache is typically referred to as the “Least Recently Used List” or the LRU list.
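A bounded LRU list of this kind might be sketched in C as follows. The sentinel-node layout and every identifier (node_t, lru_list_t, lru_add) are assumptions made for this sketch, not structures taken from the patent:

    #include <stddef.h>

    /* A doubly linked LRU list with sentinel Head and Tail nodes.
     * Objects enter at the Tail and are evicted from the Head
     * (the least recently used end). */
    typedef struct node {
        struct node *prev;        /* toward the Head (older objects) */
        struct node *next;        /* toward the Tail (newer objects) */
        /* ... object payload ... */
    } node_t;

    typedef struct {
        node_t head;              /* sentinel: head.next is the LRU object */
        node_t tail;              /* sentinel: tail.prev is the MRU object */
        size_t count, limit;      /* current size and bounded capacity     */
    } lru_list_t;

    static void lru_init(lru_list_t *l, size_t limit)
    {
        l->head.next = &l->tail;  l->head.prev = NULL;
        l->tail.prev = &l->head;  l->tail.next = NULL;
        l->count = 0;
        l->limit = limit;
    }

    /* Add at the Tail; when the cache is full, first evict the object
     * at the Head. Returns the evicted node (if any) to the caller. */
    static node_t *lru_add(lru_list_t *l, node_t *n)
    {
        node_t *evicted = NULL;
        if (l->count == l->limit) {
            evicted = l->head.next;            /* least recently used */
            l->head.next = evicted->next;
            evicted->next->prev = &l->head;
            l->count--;
        }
        n->prev = l->tail.prev;                /* link n before Tail  */
        n->next = &l->tail;
        l->tail.prev->next = n;
        l->tail.prev = n;
        l->count++;
        return evicted;
    }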
FIGS. 1A, 1B, and 2 will now be referenced to illustrate the operation of referencing an object, and specifically the overhead involved, in the prior art. Assume FIGS. 1A, 1B, and 2 show an object cache 100 that can accommodate at most four objects, and further assume that objects A, B, C, and D were inserted into the object cache in that order. FIG. 1A illustrates the state of the object cache immediately after inserting objects A, B, C, and D. Notice that the most recently added object, "D", is near the Tail, and the least recently added object, "A", is near the Head.
Upon receiving a request for a particular object in the object cache, the object cache manager removes the requested object from the LRU list. When the requesting party no longer needs that object, the object cache manager adds the object back to the LRU list. For example, assume that the object cache manager for the object cache shown in FIG. 1A receives a request for object C. In response to this request, object C is removed from the LRU list to indicate that it is currently in use, as shown in FIG. 1B. Then, when object C is no longer in use, or is considered "freed", it is inserted back into the LRU list at the tail, as shown in FIG. 2. This process of removing the object from, and inserting the object back into, the LRU list involves the updating of five objects.
This is shown in greater detail in FIG. 3. More specifically, when object C is removed from the LRU list, object B is updated (302) to cause its downward pointer (FIG. 1B) to point to object D rather than object C. Similarly, object D is updated (304) to cause its upward pointer to point to object B rather than object C. When object C is added back to the LRU list, three objects are updated. Namely, object C is updated (306) to cause its upward pointer (FIG. 2) to point to object D, and its downward pointer to point to the tail. Object D is updated (308) to cause its downward pointer to point to object C rather than the tail. Finally, the tail is updated (310) to cause its upward pointer to point to object C rather than object D. As this discussion shows, whenever there is a "hit" in the object cache 100, five objects in the object cache 100 are updated.
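Continuing the hypothetical node_t/lru_list_t sketch above, the two steps can be written out directly; the comments map each pointer assignment to the updates labeled 302 through 310 in FIG. 3:

    /* Step 1: on a hit, unlink the requested object (object C in the
     * example) from the LRU list. Two objects are updated. */
    static void lru_remove(lru_list_t *l, node_t *n)
    {
        n->prev->next = n->next;   /* update 302: predecessor (B) skips n */
        n->next->prev = n->prev;   /* update 304: successor (D) skips n   */
        n->prev = n->next = NULL;  /* off the list while "in use"         */
        l->count--;
    }

    /* Step 2: when the object is freed, reinsert it at the Tail.
     * Three more objects are updated, for five updates per hit. */
    static void lru_reinsert(lru_list_t *l, node_t *n)
    {
        n->prev = l->tail.prev;    /* update 306: n's own two pointers  */
        n->next = &l->tail;
        l->tail.prev->next = n;    /* update 308: old MRU points to n   */
        l->tail.prev = n;          /* update 310: Tail points back to n */
        l->count++;
    }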
It can be appreciated that the extra overhead involved in updating five objects every time an object is referenced in an object cache can be significant. In fact, it can become so significant that it degrades system performance. Since performance degradation is clearly an undesirable result, it is desirable to minimize this overhead as much as possible. Currently, however, there is no effective mechanism for doing so.
SUMMARY OF THE INVENTION
To overcome the shortcomings of the prior art, the present invention provides an improved mechanism for managing an object cache. According to one embodiment, when an object cache manager receives a request for an object resident in an object cache, a determination is made as to whether the requested object is currently within a particular portion of the object cache. In one embodiment, this particular portion is a portion containing some of the most recently used objects. If the requested object is within this particular portion, then the object cache manager keeps the requested object within this portion of the cache. In one embodiment, this is done by maintaining the requested object at its current position relative to other objects in the object cache. In other words, the requested object is not moved.
By not moving the requested object, the present invention eliminates the need to update a plurality of objects each time a cache "hit" occurs. This in turn significantly reduces the overhead incurred for objects that are repeatedly "hit". By removing unnecessary overhead, the present invention significantly increases the efficiency of the object cache. Hence, the present invention represents a significant improvement over the prior art.
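A hedged sketch of this idea, again extending the hypothetical lru_list_t above: treat some number of positions at the Tail (most recently used) end as a "hot" portion, and on a hit leave the object in place if it already sits there. The in_hot_portion test and the hot parameter are illustrative choices for this sketch; the patent does not prescribe this particular mechanism:

    /* On a hit, leave the object in place if it is already among the
     * `hot` most recently used entries -- zero pointer updates. Only
     * objects outside that portion pay the classic five-update cost. */
    static int in_hot_portion(const lru_list_t *l, const node_t *n, size_t hot)
    {
        const node_t *p = l->tail.prev;        /* start at the MRU end */
        for (size_t i = 0; i < hot && p != &l->head; i++, p = p->prev)
            if (p == n)
                return 1;
        return 0;
    }

    static void lru_touch(lru_list_t *l, node_t *n, size_t hot)
    {
        if (in_hot_portion(l, n, hot))
            return;                            /* hit: no updates at all  */
        lru_remove(l, n);                      /* otherwise, classic move */
        lru_reinsert(l, n);
    }

Walking the list in in_hot_portion is purely for clarity; an actual implementation would need a cheaper membership test (for example, a per-object flag or position marker), since the goal is precisely to avoid per-hit work.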


