Memory management of data buffers incorporating hierarchical...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


Details

C711S134000, C711S136000, C711S144000


active

06738866

ABSTRACT:

PRIOR FOREIGN APPLICATION
This application claims priority from Canadian patent application number 2,312,444, filed Jun. 20, 2000, which is hereby incorporated herein by reference in its entirety.
TECHNICAL FIELD
The invention relates generally to memory management and, in particular, to a data buffer memory management method, computer program product and system for increasing the effectiveness and efficiency of buffer replacement selection.
BACKGROUND OF THE INVENTION
The invention concerns, by illustrative example, the buffer memory of a database management system, or DBMS. In modern processing, a buffer memory provides a quick, relatively small-capacity interface between a processor and a generally slow, large-capacity data storage memory. A buffer memory typically comprises a set of buffers intended to store extracts from the data memory, such as a cache memory for receiving blocks of data from the data memory. A DBMS may be run by a computer having a processor, buffer memory (e.g. physical memory, usually extended via virtual memory techniques), I/O devices such as a keyboard and a video display terminal, and large-capacity data storage. An operating system provides the environment in which to run the DBMS. Typically, database extracts from the slower, large-capacity data memory are stored as pages in the buffer memory for use by the processor. Memory management of the buffers in the buffer memory is accomplished via a buffer manager operated by the processor.
If there are insufficient free pages in the buffer memory to read in new data, the buffer manager employs page replacement, or victim selection, policies to free pages. The most commonly known replacement policy is generally referred to as Least Recently Used (LRU). This policy orders a list of the buffers in the cache memory according to the time of their last use. Pages become candidates for replacement at the time they are unpinned, at which point they are added to the tail of the list. When the buffer manager does not find a page in the cache memory, an event known as a cache miss, it requests the page from the database for loading into the buffer memory; the LRU policy then selects the least recently used buffer, namely the buffer at the head of the list, to record the new page. Pages in the list that are referenced (i.e. a cache hit) before being replaced are removed from the list and re-added to the tail when unpinned.
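The LRU policy described above can be sketched as follows; this is a minimal illustration, and the class and method names are assumptions of this sketch, not taken from the patent. The head of the ordered list is the least recently used buffer (the next victim), and the tail is the most recently unpinned one.

```python
from collections import OrderedDict

class LRUBufferPool:
    """Minimal sketch of an LRU buffer pool.

    Insertion order of the OrderedDict models the LRU list:
    the first entry is the head (victim candidate), the last
    entry is the tail (most recently unpinned).
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # page_id -> data

    def fetch(self, page_id, load_from_db):
        if page_id in self.pages:
            # Cache hit: remove the page from the list and
            # re-add it at the tail.
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        # Cache miss: if no buffer is free, victimize the head of
        # the list (the least recently used buffer).
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)
        data = load_from_db(page_id)
        self.pages[page_id] = data  # new page joins the tail
        return data
```

For example, with a two-buffer pool, fetching pages 1, 2, 1 and then 3 evicts page 2, because the hit on page 1 moved it to the tail and left page 2 at the head.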
This optimization procedure suffers from the fact that it does not account for pages that are accessed more frequently, which makes them less favorable victims. The policy assumes that the nth page in the LRU page list is less likely to be reused than the (n+1)th page. It may be, however, that the nth page has a higher access rate than the (n+1)th page and is therefore less favorable as a victim for replacement.
There are similarities between database buffer management and operating system virtual memory management. U.S. Pat. No. 5,423,017 of Parikh, entitled “Method of and Apparatus for Increasing Efficiency of Ager” and issued Jun. 6, 1995, discusses a novel way for operating systems to swap or page out processes from physical memory in a page-based virtual memory management system. It classifies processes into four classes based on the amount of run time each has accumulated, and then chooses to swap or page out the processes that have been running the least. The method and system disclosed therein handle process data, which is made up of code and data segments, and do not distinguish between the clean and dirty data pages in data segments during swap/page-outs. Further, processes are assigned to classes for aging based only upon their most recent perceived use, as measured by counting CPU time slices.
The method disclosed in U.S. Pat. No. 5,423,017 thus takes into account run time information, in this case a count of CPU time slices, as a measure of recent activity level or perceived use to predict the need to retain a process's pages in memory.
U.S. Pat. No. 5,870,551 of Ozden et al., entitled “Lookahead Buffer Replacement Method Using Ratio of Clients Access Order Offsets and Buffer Data Block Offsets” and issued Feb. 9, 1999, introduces two methods of estimating anticipated use by measuring current use conditions in a continuous multimedia data distribution environment. In the first method, a future value is determined for each data buffer in the cache by examining the requirements of each client that will access the buffer; the buffer with the lowest anticipated future value is allocated for early replacement. In the second method, an estimate is made of the next time a buffer will be accessed by a client by analyzing the relative position, or distance, of the clients to the buffer; buffers associated with a client at a greater distance are allocated for release before those at a lesser distance.
Neither method disclosed in the respective patents employs estimates of future use that may be independent of current use conditions.
As an alternative to LRU, clock-based algorithms provide a variable that points to the page currently under consideration for replacement. To approximate LRU behavior and to handle hot pages, each page has an associated weight count, which is set when the page is unpinned and decremented whenever the clock points to the page. If the weight reaches zero, the page is chosen for replacement; otherwise, the clock is advanced to point to the next page to be evaluated. Access to the clock is serialized.
What is needed in the art is a more effective and efficient memory management using page buffers.
SUMMARY OF THE INVENTION
It is an object of the invention to provide memory management and, in particular, a data buffer memory management method, computer program product and system for increasing the effectiveness and efficiency of buffer replacement selection.
The invention provides, in a first aspect, a method for managing memory using page buffers. The method comprises a step for determining, for each buffer, a measure of the favorability for victimizing the buffer; a step for assigning each buffer in an order to one of a plurality of levels, said one level selected according to the measure of the favorability for victimizing the buffer, the plurality of levels denoting a buffer hierarchy for prioritizing the victimization of buffers; and a step for victimizing the buffers based on the buffer hierarchy and the order of the buffers in the levels. The method may further place the buffers in cold, warm or hot levels. The method may further comprise a step for determining if the buffer has previously been assigned to a level and, if so, a step for determining a further measure of favorability for victimization for the buffer previously assigned. In such a case, the step for assigning is based additionally on the further measure of favorability for victimization.
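The assignment step above — mapping each buffer's measure of favorability for victimization to one of the cold, warm or hot levels — can be sketched as follows. The numeric measure (a value in [0, 1]) and the thresholds are illustrative assumptions; the patent does not specify how the measure is computed.

```python
from enum import IntEnum

class Level(IntEnum):
    """Levels of the buffer hierarchy, ordered by victimization priority."""
    COLD = 0   # most favorable victims
    WARM = 1
    HOT = 2    # least favorable victims

def assign_level(favorability, cold_threshold=0.66, warm_threshold=0.33):
    """Assign a buffer to a level from its favorability-for-victimization
    measure. Here favorability is assumed to be a value in [0, 1], with
    higher values meaning the buffer is a more favorable victim.
    """
    if favorability >= cold_threshold:
        return Level.COLD
    if favorability >= warm_threshold:
        return Level.WARM
    return Level.HOT
```

Within each level, buffers would additionally be kept in an order, since the method victimizes based on both the hierarchy and the order of the buffers in the levels.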
Additionally, the method may further comprise a step for determining a buffer content characteristic for each buffer, typically the clean/dirty status of the buffer; the step for assigning is then based additionally on the buffer content characteristic and, typically, the buffer hierarchy distinguishes clean buffers in the plurality of levels from dirty buffers in the plurality of levels. Further, a step for transforming the dirty buffers in the buffer hierarchy into clean buffers in the buffer hierarchy may be present.
A step for aging the buffers in the levels to identify buffers as eligible for victimization according to the buffer hierarchy and the order of the buffers in the levels may also be provided.
According to the invention, the step for victimizing comprises a step for determining a preferred level from the plurality of levels from which to release a buffer according to a preferred victimization scheme; a step for locating the buffer for releasing upon examination of the preferred level; and a step for releasing the buffer. If the preferred level has no buffer for releasing, there is provided a step for examining one or more additional levels according to the buffer hierarchy and the order of the buffers in the levels to locate the buffer for releasing.
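The victimization step — examining a preferred level first and falling back through the remaining levels of the hierarchy — can be sketched as follows. The level names follow the text; the concrete fallback order and the clean-before-dirty preference are assumptions of this sketch.

```python
from collections import deque

class HierarchicalPool:
    """Sketch of hierarchical victim selection with fallback levels.

    Each level keeps separate ordered queues of clean and dirty
    buffers, reflecting a hierarchy that distinguishes the two.
    """

    LEVELS = ["cold", "warm", "hot"]

    def __init__(self):
        self.clean = {lvl: deque() for lvl in self.LEVELS}
        self.dirty = {lvl: deque() for lvl in self.LEVELS}

    def add(self, buffer_id, level, dirty=False):
        (self.dirty if dirty else self.clean)[level].append(buffer_id)

    def victimize(self, preferred="cold"):
        # Examine the preferred level first, then the remaining
        # levels in hierarchy order.
        start = self.LEVELS.index(preferred)
        order = self.LEVELS[start:] + self.LEVELS[:start]
        for lvl in order:
            if self.clean[lvl]:
                # Clean buffers can be released immediately.
                return self.clean[lvl].popleft()
        for lvl in order:
            if self.dirty[lvl]:
                # Dirty buffers would first be written back
                # (transformed to clean) before release.
                return self.dirty[lvl].popleft()
        return None  # no buffer currently eligible for release
```

For example, if the cold level holds only a dirty buffer and the warm level holds a clean one, a request preferring the cold level falls back to the clean warm buffer first.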
The measure of the favorability for
