Cache object store

Type: Reexamination Certificate
Filed: 1999-06-22
Issued: 2003-04-01
Examiner: Yoo, Do Hyun (Department: 2187)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
U.S. Classes: 711/135; 711/113; 711/4
Status: active
Patent number: 06542967
BACKGROUND OF THE INVENTION
Users having computers interconnected by an institutional intranet or local area network may access various remote sites (such as those on the “World-Wide Web”) via the well-known Internet communications network. Using resident web browser applications executing on the computers, these “clients” may navigate among data (“pages”) stored on various servers (“web sites”) and may further view the contents of these pages as desired. In a basic network communication arrangement, clients are free to access any remote web site for which uniform resource locator (URL) addresses are available. It is increasingly common in network applications to provide each client with access to a so-called proxy server that links to the Internet. A proxy server accesses requested data from the web sites and stores it locally (i.e., “caches” the data) to effectively speed up client access and reduce the download time of future requests for the data. In response to a request from a browser executing on a client, the proxy server attempts to fulfill that request from its local cache storage; if it cannot, the proxy server forwards the request over the Internet to a server that can satisfy the request. The server then responds by transferring a stream of data to the proxy server, which caches and forwards the data on to the client.
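As a rough sketch of this request path (in Python; the names ProxyCache and fetch_from_origin are illustrative choices, not taken from the patent):

```python
# Minimal sketch of the proxy-cache flow described above: satisfy a
# request from the local cache if possible, otherwise forward it to an
# origin server, cache the response, and return it to the client.
class ProxyCache:
    def __init__(self, fetch_from_origin):
        self.store = {}                       # url -> cached response body
        self.fetch_from_origin = fetch_from_origin

    def get(self, url):
        # Try to fulfill the request from local cache storage first.
        if url in self.store:
            return self.store[url]
        # Cache miss: forward the request to a server that can satisfy
        # it, then cache and forward the data on to the client.
        data = self.fetch_from_origin(url)
        self.store[url] = data
        return data
```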
Caches have become increasingly important in the context of proxy servers as the amount of data consumed over networks increases. A cache system typically stores a subset of an entire data set, and that data is constantly rotated in and out of the cache in accordance with a replacement algorithm that identifies the data to be replaced, such as a conventional least recently used (LRU) algorithm.
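A conventional LRU policy of the kind mentioned above can be sketched as follows; the capacity bound and dictionary-based store are illustrative choices:

```python
from collections import OrderedDict

# Conventional least-recently-used (LRU) replacement: the entry that
# has gone unused the longest is rotated out when the cache is full.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()            # oldest entry first

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)           # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)    # evict the LRU entry
```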
The cache system is not the primary source of the data set and, therefore, it can retrieve any data that has been deleted or lost from a source that “publishes” the data. Despite increasing network bandwidth, it is desirable to cache data closer to the consumer of that data, especially as local client access speeds and content density increase. In this context, closeness is defined in terms of bandwidth or accessibility to the data so as to enhance a user's experience. The typical Internet model, wherein the publisher of the data is provided with substantial content capacity, i.e., the ability to service (or deliver) content as requested, is fundamentally non-scalable. Widespread use of caching technology increases scalability and decreases content access requests at the content origin site.
Cache systems generally rely on the ability to organize access requests in a fast storage mechanism, such as memory composed of random access memory devices. If the cache of a proxy server is servicing a busy communications channel, it will eventually exhaust the memory. At this point, the system may (in accordance with the conventional replacement algorithm) either discard portions of the cached data or move those portions from memory to another storage mechanism, such as a disk. Although this latter option increases the persistency of the cached data and extends the amount of cache memory, it also introduces a relatively slow storage mechanism into the cache system.
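The two options described above, discarding evicted data or moving it to a slower store, might look like this in outline; the spill directory and serialization format are assumptions of this sketch, not details from the patent:

```python
import os
import pickle

# When memory is exhausted, an evicted entry is either discarded or
# moved to a slower storage mechanism such as a disk. Spilling to disk
# extends the effective cache size and makes the data more persistent,
# at the cost of slower access.
def evict(key, value, spill_dir=None):
    if spill_dir is None:
        return None  # option 1: simply discard the cached data
    # Option 2: persist the evicted entry so it can be reloaded later.
    path = os.path.join(spill_dir, f"{abs(hash(key)):x}.obj")
    with open(path, "wb") as f:
        pickle.dump((key, value), f)
    return path
```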
Two common paradigms for the persistent storage of data are file systems and database systems. A file system contains general knowledge of the organization of the data stored on storage devices, such as memories and disks, needed to implement the properties/performance of a desired storage architecture. A database system provides a structured data model that may be implemented on a file system or other storage architecture. Notably, there is an expectancy that the data (i.e., “content”) stored on the file system or database will be preserved until explicitly removed. Persistency with respect to the storage of content, e.g., naming of data files and their non-volatile storage, is paramount to other properties/performance metrics, such as organization of, and speed of access to, the stored content. As such, these characteristics of a file system or database are not generally suited to the access and volatility characteristics of a cache system.
Conventional file systems have evolved to take advantage of higher disk densities but have not generally overcome limitations on the number of disk operations per second. Disk density/capacity generally increases on a price/performance curve similar to that of semiconductor technologies by, e.g., making disk tracks thinner. However, disk access times are not decreasing at the same rate due primarily to physical constraints; indeed, the number of disk operations per second is increasing only minimally due to rotational latencies and head-throw seek times.
Therefore, a feature of the present invention is to provide a cache system that efficiently retrieves and stores data transferred over a computer network.
Another feature of the present invention is to provide a cache system having features of a persistent store and a non-persistent store.
Yet another feature of the invention is to provide a cache system that includes volatile and non-volatile (e.g., disk) storage capabilities.
Yet another feature of the invention is to provide multiple memory abstractions that allow exploitation of the characteristics of a cache environment.
Still yet another feature of the present invention is to provide a cache system that includes a mechanism for reducing the number of disk operations needed to store data and that advantageously utilizes disk density.
SUMMARY OF THE INVENTION
The invention comprises a cache object store organized to provide fast and efficient storage of data as cache objects, which can be organized into cache object groups. The cache object store preferably embodies a multi-level hierarchical storage architecture comprising (i) a primary memory-level (RAM) cache store and (ii) a secondary disk-level cache store, each of which is configured to optimize access to the cache object groups. These levels of the cache object store further cooperate to provide an enhanced caching system that exploits persistent and non-persistent storage characteristics of the inventive architecture.
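One way to picture the two-level lookup is the sketch below; the class name, on-disk layout, and promotion-on-hit behavior are assumptions for illustration only:

```python
import os
import pickle

# Sketch of a lookup through the hierarchical architecture described
# above: a primary memory-level (RAM) store backed by a secondary
# disk-level store.
class CacheObjectStore:
    def __init__(self, disk_dir):
        self.ram = {}                         # primary memory-level cache
        self.disk_dir = disk_dir              # secondary disk-level cache

    def _disk_path(self, key):
        return os.path.join(self.disk_dir, f"{abs(hash(key)):x}.grp")

    def get(self, key):
        if key in self.ram:                   # fastest: hit in RAM
            return self.ram[key]
        path = self._disk_path(key)
        if os.path.exists(path):              # slower: hit on disk
            with open(path, "rb") as f:
                value = pickle.load(f)
            self.ram[key] = value             # promote back into RAM
            return value
        return None                           # miss at every level
```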
In the illustrative embodiment of the invention, the memory-level and disk-level stores are optimized as fast cache components by exploiting the characteristics/attributes of the memory and disk storage devices constituting these stores. For example, the memory devices are configured to be efficiently accessed on “natural” boundaries to conform with address mapping arrangements, whereas the disks are optimized for such attributes as geometry, head movement and sector interleaving. If another, tertiary-level cache is used in the hierarchical architecture, those storage devices would be similarly characterized and advantageously employed.
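Access on “natural” boundaries amounts to keeping transfers aligned to the device's block size; a minimal helper, with the 4096-byte block size as an assumed example:

```python
# Round an allocation up to the device block size so reads and writes
# stay aligned to natural boundaries. The 4096-byte default is an
# assumption of this sketch, not a figure from the patent.
def align_up(nbytes, block_size=4096):
    return (nbytes + block_size - 1) // block_size * block_size
```

For example, align_up(5000) returns 8192 with the assumed 4 KB block.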
A cache object manager implements various aging and storage management algorithms to manage the cache object store. An example of such an aging policy is a modified least recently used (LRU) algorithm that strives to keep the object groups that are accessed most often in the primary-level cache store, with as many of the remaining object groups as possible stored on the secondary-level store for quick retrieval. According to the cache object manager policy, each object group is marked with a time of last access, which indicates the frequency at which the object group is accessed within the cache store, and a cost of reacquisition, which is used to determine which object groups to move or delete.
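The patent names the inputs to this policy (time of last access and cost of reacquisition) but not how they are combined; the scoring formula below is therefore one plausible sketch, not the patent's method:

```python
import time

# Pick a victim under a modified-LRU policy: prefer object groups that
# are both stale (long since last access) and cheap to reacquire.
# Each group is a dict with "last_access" (epoch seconds) and
# "reacquisition_cost" fields; both names are illustrative.
def eviction_candidate(groups):
    def score(group):
        staleness = time.time() - group["last_access"]  # older -> larger
        return staleness / max(group["reacquisition_cost"], 1e-9)
    # Highest score = least recently used and cheapest to refetch.
    return max(groups, key=score)
```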
A cache directory manager cooperates with the cache object manager to implement the storage management policies. The secondary-level store is primarily used to relocate certain object groups from memory to disk if the aging mechanism recommends relocation. Relocation of an object will result in one of three states: object in RAM only, object in RAM and disk, or object on disk only. The cache directory manager maintains lists of object groups to be moved for each object group size. The storage management policy seeks to optimize movement of cache object groups from memory to disk by, e.g., moving the disk head
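The three relocation states and the per-size move lists might be modeled as follows; all names here are illustrative, as the patent gives no field or class names:

```python
from enum import Enum
from collections import defaultdict

# The three states an object can be in after relocation, per the text.
class Location(Enum):
    RAM_ONLY = 1
    RAM_AND_DISK = 2
    DISK_ONLY = 3

class CacheDirectoryManager:
    def __init__(self):
        self.location = {}                    # group_id -> Location
        # One list of object groups awaiting relocation per group size,
        # so groups of like size can be written out together.
        self.move_lists = defaultdict(list)

    def schedule_move(self, group_id, size):
        # Still RAM_ONLY until the write completes; it would become
        # RAM_AND_DISK after the write, then DISK_ONLY once the RAM
        # copy is dropped.
        self.move_lists[size].append(group_id)
        self.location[group_id] = Location.RAM_ONLY
```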
Assignee: Novell Inc.
Assistant Examiner: Peugh, B. R.
Attorney, Agent or Firm: Schwegman Lundberg Woessner & Kluth P.A.