Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate (active)
Filed: 2000-10-26
Issued: 2003-10-28
Examiner: Sparks, Donald (Department: 2181)
U.S. Classes: C711S141000; C711S154000; C711S159000
Patent Number: 06640285
ABSTRACT:
FIELD OF THE INVENTION
This invention relates to data processing and storage systems and more particularly to cache memory storage used in such systems.
BACKGROUND OF THE INVENTION
In virtually every system involving the processing or communication of information, blocks of data must be stored in, retrieved from, or transferred between storage systems. During the movement of such data, it is common practice to temporarily store at least some of the data blocks in small memories called cache memories. Cache memories are generally used to reduce the time required to access the data.
For example, the speed of information retrieval differs radically between storage systems depending on their type and construction. There is often a tradeoff between retrieval speed and storage system cost. Rotating disk storage systems, for example, are very cost effective but suffer from long retrieval times, or latency, because gross physical motion of the rotating disk or read head is often required to retrieve a particular block of data. Semiconductor storage systems, such as random access memories, do not require physical motion to retrieve data and thus often have very fast retrieval times. However, these memories are generally much more expensive than disk-based systems. Therefore, to reduce data access time while still containing storage costs, a small semiconductor memory is often used to cache data retrieved from a disk storage system.
The cache memory conventionally has a much smaller capacity than the disk storage system and stores the most frequently requested data. If a data block that is present in the cache memory is requested, then a cache “hit” results and the data can be retrieved directly from the cache memory with much lower retrieval time. If a data block that is not in the cache memory is requested, then a cache “miss” results and the data must be retrieved from the underlying storage system with a higher retrieval time. In case of a cache miss, the requested data may be stored in the cache memory so that a subsequent request for the same data will result in a cache “hit.” Special algorithms are used to decide which data blocks to keep in the cache memory, which blocks to discard, and which blocks to store in the associated storage system (called paging the blocks out). These same issues apply to a greater or lesser degree in any context in which blocks of data from one level in the storage media hierarchy are cached at another level. In this more general context, the data blocks that are moved into, or out of, the cache memory are often referred to as cache “lines.”
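The hit/miss flow described above can be sketched as follows. This is a minimal illustration, not the patent's own method: it uses a least-recently-used replacement policy as the "special algorithm", and all names (`SimpleCache`, `backing`) are assumptions.

```python
from collections import OrderedDict

class SimpleCache:
    """Toy line cache over a slower backing store (a dict stands in here)."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity       # maximum number of cached lines
        self.lines = OrderedDict()     # key -> data, ordered by recency
        self.backing = backing_store   # slower underlying storage

    def read(self, key):
        if key in self.lines:              # cache "hit": fast path
            self.lines.move_to_end(key)    # mark line as most recently used
            return self.lines[key]
        data = self.backing[key]           # cache "miss": slow retrieval
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False) # discard least recently used line
        self.lines[key] = data             # cache it for subsequent requests
        return data
```

A repeated read of the same key then hits the cache and skips the backing store entirely.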
The efficiency of such a cache memory is highly dependent on the methods used to determine which cache lines to store in the cache memory, how long to retain the cache lines in the cache memory, and when to release the cache lines from the cache memory (by either discarding them if they have not been modified, or paging them out if they have been modified) to make room for new lines that are presumably in higher demand. Thus, management of cache memory systems often revolves around the selection and implementation of these methods.
The management of cache memories is further complicated in storage systems in which data is efficiently stored and retrieved in data blocks called “pages.” If the page size, or the amount of data that is moved back and forth between the cache and the next level of memory, consists of more than one cache line, and the way in which cache lines are assembled into pages is largely unconstrained, then the cache memory efficiency will be highly dependent on how the cache lines are assembled into pages. For example, cache memory performance can be greatly enhanced by grouping cache lines that tend to be accessed in close time proximity into the same page, so that these lines can be stored and retrieved at the same time.
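The page-assembly idea above can be illustrated with a simple sketch: pack lines onto pages in the order of their first access, so lines accessed close together in time tend to land on the same page. The function name and greedy packing strategy are assumptions for illustration, not the patent's method.

```python
def group_lines_into_pages(access_log, lines_per_page):
    """access_log: iterable of (timestamp, line_id) pairs.

    Orders lines by their first access time, then greedily packs
    them into fixed-size pages so temporally adjacent lines share a page.
    """
    seen = []
    for _, line in sorted(access_log):   # replay accesses in time order
        if line not in seen:             # record each line's first access
            seen.append(line)
    return [seen[i:i + lines_per_page]   # slice into page-sized groups
            for i in range(0, len(seen), lines_per_page)]
```

Lines that are always touched together then arrive in the cache with a single page fetch.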
An example of a caching environment demonstrating cache memory management problems is one involving typical file systems that use an underlying page-oriented storage system. Information that describes the attributes of the various files and directories (i.e., file system “objects”) comprising the file system and that identifies where those objects can be found in the media used to store them is usually referred to as file system “metadata.” This metadata itself must be assembled into pages and stored, generally on the same medium used to store the file system data. Typically, each object in a file system is assigned an identification number called a “handle” when it is created and that handle (or some portion of it) is used to locate the object whenever a subsequent reference is made to it. A metadata structure is then maintained that maps between object handles and physical disk locations where the corresponding object attributes are stored so that object attributes can be retrieved with an object handle.
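The handle-to-location mapping described above can be sketched as a small table keyed by chronologically assigned handles. All names here (`HandleTable`, `create_object`, `locate_attributes`) are hypothetical, chosen only to mirror the description:

```python
import itertools

class HandleTable:
    """Toy metadata map: object handle -> physical (page, slot) location."""

    def __init__(self):
        self._next = itertools.count()  # handles assigned in creation order
        self._location = {}             # handle -> (page_no, slot)

    def create_object(self, page_no, slot):
        """Assign the next handle and record where the attributes live."""
        handle = next(self._next)
        self._location[handle] = (page_no, slot)
        return handle

    def locate_attributes(self, handle):
        """Map a handle back to the page and slot holding its attributes."""
        return self._location[handle]
```

Any later reference to the object needs only its handle; the table resolves it to the page that must be fetched.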
An object's attributes generally comprise a small amount of data and, since it is inefficient to read amounts of data less than a page from the underlying storage system, the attributes of multiple objects—generally on the order of eight to sixteen objects—are combined to form a single page of the metadata structure. In a conventional file system, the attributes on a page typically correspond to attributes for objects with related object handles. Because an object handle is assigned at the time the object is created and the handle is based, for example, on the object's name or on the chronological order of its creation, the attributes on a page describe objects that tend to be uncorrelated. Therefore, when a page is retrieved using a handle to get access to an object's attributes, the other attributes on that page are not likely to be of current interest.
Nevertheless, since an object that has been accessed is likely to be accessed again within a relatively short period of time, most known file systems attempt to cache the page containing the desired object's attributes so that they do not have to be repeatedly retrieved from the underlying storage. The result is that most of the cached attributes are not of current interest and the effectiveness of the cache is much less than it would be if all cached attributes were associated with objects that were currently active.
As an alternative to caching the entire page of attributes, some file systems cache only the attributes of the object of interest. This strategy still suffers from two major disadvantages. First, an entire page still has to be fetched from the underlying storage in order to get access to the attributes associated with only one object, thereby eliminating the efficiencies obtained by reading entire pages from the underlying storage. Second, since the attributes associated with an object usually contain information that dynamically changes, such as the time the object was most recently accessed, those attributes must be paged back to the underlying storage when space in the cache is needed for more current information. Since the entire page is not cached, during the paging-back process the page must be re-read from the underlying storage so that the changed attribute can be modified, and then the entire page must be rewritten to the underlying storage.
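The read-modify-write cost described above can be made concrete with a small sketch. The `PageStore` class and its I/O counter are assumptions for illustration; the point is that writing back a single changed attribute costs two full-page I/Os.

```python
PAGE_SIZE = 8  # attributes per page (assumed, within the 8-16 range above)

class PageStore:
    """Toy page-oriented store: a page is a fixed-size list of attributes."""

    def __init__(self):
        self.pages = {}
        self.io_count = 0  # counts full-page reads and writes

    def read_page(self, n):
        self.io_count += 1
        return list(self.pages.get(n, [None] * PAGE_SIZE))

    def write_page(self, n, page):
        self.io_count += 1
        self.pages[n] = list(page)

def write_back_attribute(storage, page_no, slot, new_attrs):
    """Read-modify-write cycle: since only one object's attributes were
    cached, the whole page is re-read, patched, and rewritten."""
    page = storage.read_page(page_no)  # first I/O: re-read the full page
    page[slot] = new_attrs             # patch the single changed entry
    storage.write_page(page_no, page)  # second I/O: rewrite the full page
```

Caching the whole page instead would avoid the re-read, at the cost of holding attributes that are not of current interest.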
Therefore, there is a need for a cache memory management system that can efficiently use cache memories with a page-oriented underlying storage system.
SUMMARY OF THE INVENTION
In accordance with the principles of the present invention, the efficiency of cache memories is improved by assigning to each cache line a measure of its relative activity and using that measure to determine which lines to retain in the cache memory and which lines to remove when room is needed for new lines.
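The retention idea above can be sketched as follows: each cached line carries a relative activity measure, and when room is needed the least active line is removed first. This is a minimal illustration under assumed details (a simple hit counter as the activity measure; names like `ActivityCache` are hypothetical), not the patent's precise mechanism.

```python
class ActivityCache:
    """Toy cache that evicts the line with the lowest activity measure."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}     # key -> data
        self.activity = {}  # key -> relative activity measure

    def access(self, key, load_fn):
        if key in self.lines:
            self.activity[key] += 1.0  # hit: raise the line's activity
            return self.lines[key]
        if len(self.lines) >= self.capacity:
            # room needed: remove the line with the lowest activity
            victim = min(self.activity, key=self.activity.get)
            del self.lines[victim]
            del self.activity[victim]
        self.lines[key] = load_fn(key)  # miss: load from slower storage
        self.activity[key] = 1.0        # new line starts with base activity
        return self.lines[key]
```

Frequently re-accessed lines accumulate activity and survive eviction, while lines touched only once are removed first.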
In accordance with a preferred embodiment, if the page size exceeds the line size and the cache manager has the ability to determine how lines are assembled into pages, the cache memory efficiency is further improved by dynamically grouping lines into pages in accordance with the current levels of line activity.
Inventors: Bopardikar, Raju C.; Stiffler, Jack J.
Assignee: EMC Corporation
Attorneys/Agents: Kudirka & Jobse, LLP; Peugh, Brian R.