Title: Method for holding cache pages that are not invalidated...
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Filed: 1997-11-24
Issued: 2001-03-27
U.S. Classes: C711S133000, C711S154000, C711S159000
Status: Active
Patent Number: 06209062
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present application relates to database management systems that implement buffer memory management processes to reduce main memory transaction times. More particularly, the present application provides a cache memory management process that predicts which pages in memory may be accessed in the near future and maintains those pages in the cache.
2. Description of the Related Art
Many database management systems use high-speed buffer memory, known as a buffer cache, to speed up database transactions between a central processing unit (CPU) and disk storage and thereby increase the overall speed of the data processing system. Typically, data is transferred from disk to main memory in pages or blocks. The transferred data typically includes the data requested by the CPU plus additional data, sometimes known as prefetched data, that is judged most likely to be requested by the CPU soon.
Usually the total size of the database on disk is significantly larger than the amount of memory available in the CPU system, so the cache eventually becomes full. As a result, the database management system must decide which pages in the cache are to be removed and which are to be maintained.
Due to the temporal locality of database cache accesses, most pages that were accessed in the recent past are very likely to be accessed again in the near future. Thus, conventional database management systems use a least recently used (LRU) memory management process, in which the page that was least recently accessed is removed from the cache, as the short sketch below illustrates.
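For reference, the conventional LRU process can be expressed in a few lines of Python. This is a minimal illustrative sketch, not code from the patent; the class name LRUCache and the load_page callback are assumptions made for the example.

from collections import OrderedDict

class LRUCache:
    """Minimal sketch of conventional LRU page replacement (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> page data, least recently used first

    def access(self, page_id, load_page):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)            # hit: mark as most recently used
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)         # full: evict least recently used page
            self.pages[page_id] = load_page(page_id)   # miss: fetch the page (e.g., from disk)
        return self.pages[page_id]

Every hit moves a page to the most-recently-used end of the ordering, so the page at the opposite end is always the next eviction candidate.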
However, not all database cache pages have the same access characteristics. For example, some pages are frequently accessed over a long period of time, while other pages are accessed a number of times within a short period after the first access and then are not accessed again for a long time.
In multi-user environments, more than one transaction may access some or all of the same pages in the cache. If the LRU memory management process is used in such environments, pages may be replaced in the cache after one transaction has accessed them but before another transaction accesses some or all of them. In that case the CPU has to retrieve the pages all over again, increasing memory transaction times; the toy trace after this paragraph makes the effect concrete. For optimum cache performance, the page whose next access lies farthest in the future is the page that should be replaced, although such a policy requires knowledge of future accesses and can only be approximated in practice.
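The workload below is invented purely for illustration: transaction T1 reads three shared pages, unrelated traffic fills a three-slot LRU cache, and transaction T2 then re-reads the shared pages and misses on all of them.

def count_misses(trace, capacity=3):
    cache, misses = [], 0              # list kept in LRU order, oldest first
    for page in trace:
        if page in cache:
            cache.remove(page)         # hit: reposition as most recently used
        else:
            misses += 1
            if len(cache) >= capacity:
                cache.pop(0)           # miss on a full cache: evict the LRU page
        cache.append(page)
    return misses

trace = [1, 2, 3,    # T1 reads three shared pages
         8, 9, 10,   # unrelated traffic evicts them under LRU
         1, 2, 3]    # T2 re-reads the shared pages: three avoidable misses
print(count_misses(trace))  # prints 9; holding pages 1-3 would have given 6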
SUMMARY OF THE INVENTION
The present application provides a memory management process that identifies the transaction that first accesses at least one page in the cache memory, determines whether a second transaction accesses that page within a cache storage time, and, if it does, maintains the page in the cache for an extended period of time. The cache storage time is the time a particular page would normally reside in the cache memory before being replaced, and it may vary from page to page.
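A minimal sketch of this process, under stated assumptions, might keep for each cached page the transaction that first touched it and a deadline derived from its cache storage time. The names PageEntry and touch and the use of a monotonic-clock deadline are assumptions for the example; the patent does not prescribe a particular mechanism.

import time

class PageEntry:
    """Per-page bookkeeping for the claimed process (names are illustrative)."""

    def __init__(self, data, first_txn, storage_time):
        self.data = data
        self.first_txn = first_txn                       # transaction that first accessed the page
        self.deadline = time.monotonic() + storage_time  # end of the normal cache storage time
        self.recycle = False                             # set when the page earns extended residence

def touch(entry, txn_id):
    """Record an access; mark the page for recycling if a second transaction arrives in time."""
    if txn_id != entry.first_txn and time.monotonic() < entry.deadline:
        entry.recycle = True   # a second transaction hit within the cache storage time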
In an alternative embodiment, a method for managing cache memory allocation is provided. In this embodiment, a determination is made of which pages in the cache memory are most likely to be accessed again in the near future, and those pages are maintained in the cache for an extended period of time. Preferably, the determination is made by identifying the transaction that first accesses pages in the cache memory and then determining whether a second transaction accesses the same page, or some of the previously accessed pages, during the period a page would normally reside in the cache before being replaced.
The present application also provides a memory management system that combines a least recently used memory management process with a page recycling technique to determine which pages in the cache are replaced. As noted above, the LRU memory management process replaces the least recently used page in the cache with a new page. Under the combined scheme, however, pages that are assigned a recycle value for page recycling are maintained in the cache, while pages with no recycle value are replaced first. One possible realization is sketched below.
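One plausible reading of the combined policy resembles a second-chance scan: victims are considered in LRU order, but a page holding a recycle value is passed over once, its value being cleared instead, so unmarked pages are evicted first. The sketch below encodes that assumed interpretation; the patent text itself does not spell out how a recycle value is consumed.

def choose_victim(entries):
    """Pick the page to evict from a non-empty list of (page_id, entry) in LRU order."""
    for page_id, entry in entries:     # least recently used first
        if entry.recycle:
            entry.recycle = False      # spend the recycle value; keep the page this pass
        else:
            return page_id             # first page with no recycle value is replaced
    return entries[0][0]               # every page was recycled: fall back to plain LRU

Here the entry objects are assumed to carry the recycle flag set by touch above, so a recycled page survives at least one extra eviction pass before becoming a candidate again.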
REFERENCES:
patent: 4943908 (1990-07-01), Emma et al.
patent: 5442571 (1995-08-01), Sites
patent: 5493667 (1996-02-01), Huck et al.
patent: 5539893 (1996-07-01), Thompson et al.
patent: 5546559 (1996-08-01), Kyushima et al.
patent: 5611071 (1997-03-01), Martinez, Jr.
patent: 5644751 (1997-07-01), Burnett
patent: 5754820 (1998-05-01), Yamagami
patent: 5941980 (1999-08-01), Shang et al.
patent: 5948100 (1999-09-01), Hsu et al.
patent: 6044478 (2000-03-01), Green
patent: 6065099 (2000-05-01), Clark et al.
Inventors: Vernon K. Boland; John H. Waters
Attorney, Agent, or Firm: Blakely, Sokoloff, Taylor & Zafman LLP
Examiners: Reginald G. Bragdon (Department: 2186); Christian P. Chace
Assignee: Intel Corporation