Thrashing reduction in demand accessing of a data base through a


Details

Class: 364200, G11C 906
Type: Patent
Status: active
ID: 044221452

DESCRIPTION:

BRIEF SUMMARY
TECHNICAL FIELD

This invention relates to a CPU implementable method for minimizing thrashing among concurrent processes demand paging a data base through an LRU page organized buffer pool.


Background Art

Virtual memory is an address map which is larger than the available physical memory and abolishes the distinction between main and secondary memory. The concept of a one-level store can be realized by a "paging" technique. In this regard, a virtual address space is divided into "pages" of equal size, and main memory is similarly divided into page frames of the same size. Page frames are shared among the processes currently executing in a CPU system, so that at any time a given process will have a few pages resident in main memory and the rest resident in secondary memory.
The paging mechanism has two functions: first, it executes the address-mapping operation; second, it transfers pages from secondary memory to main memory when they are required and returns them when they are no longer being used. A buffer space, termed a cache, and a cache manager implement the referencing functions. For example, an address-mapping operation for paging could use a small associative store by which a parallel table look-up could be made in order to locate pages contained within the buffer. The unavailability of a page in the buffer is referred to as a page fault. The remedy for a page fault requires accessing secondary memory (a backing store or storage subsystem) in order to obtain the requested page.
The management of a cache or buffer pool requires that consideration be given to where information is to be placed in the cache, what information is to be removed so as to create unallocated regions of cache, and when information is to be loaded, for example, on demand or in advance. Contemporary cache management involves replacement of the page which has been least recently used (LRU). This involves the assumption that future behavior will closely follow recent behavior.
Processes operate in context. That is, in a small time interval, a process tends to operate within a particular logical module, drawing its instructions from a single procedure and its data from a single data area. Thus, memory references tend to be grouped into small localities of address space. The locality of reference is enhanced by the frequent occurrence of looping; the tighter the loop, the smaller the spread of references. This principle of locality is the generalization of the observation that program references tend to be grouped into small localities of address space and that these localities tend to change only intermittently.
Denning, "The Working Set Model for Program Behavior," Communications of the ACM, Vol. 11, pp. 323-333, May 1968, applied the locality principle in the context of paged memory to formulate a so-called "working set" model. He further observed that the competition for memory space among processes could lead to a marked increase in the paging traffic between main and secondary memories, accompanied by a sudden decrease in processor utilization. He still further observed that a high degree of multiprogramming makes it impossible for every process to keep sufficient pages in memory to avoid generating a large number of page faults. This means that the path connecting a backing store to a cache can become saturated, in that most processes would be hung up awaiting a page transfer. This is an aspect of "thrashing."
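LRU replacement as described here can be sketched with an ordered map, where the most recently used page sits at one end ("top of the stack") and the least recently used page is evicted from the other. The `LRUBuffer` name is illustrative, not from the patent.

```python
# Hedged sketch of LRU page replacement using an OrderedDict: the
# insertion order of keys tracks recency of reference.
from collections import OrderedDict

class LRUBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def reference(self, page):
        hit = page in self.pages
        if hit:
            self.pages.move_to_end(page)        # promote to MRU ("top of stack")
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the LRU page
            self.pages[page] = True
        return hit

buf = LRUBuffer(3)
for p in [1, 2, 3, 1, 4]:    # referencing 4 evicts page 2, the LRU page
    buf.reference(p)
print(list(buf.pages))        # [3, 1, 4], ordered LRU -> MRU
```

Note how re-referencing page 1 saved it from eviction: that is the "future follows recent behavior" assumption in action.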
Under these circumstances, Denning proposed that each process requires a certain minimum number of pages, termed "a working set," to be held in main memory before it can effectively use the CPU. If fewer than this number of pages are present, then the process will continually be interrupted by page faults.
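Denning's working set W(t, tau) is the set of distinct pages a process referenced in the last tau references up to time t; its size is the minimum residency the paragraph above describes. A minimal sketch, with an invented reference trace:

```python
# Illustrative computation of the working set W(t, tau): the distinct
# pages referenced in the window of the last tau references ending at t.

def working_set(trace, t, tau):
    window = trace[max(0, t - tau + 1): t + 1]
    return set(window)

trace = list("aabacbbad")             # hypothetical page-reference string
ws = working_set(trace, t=8, tau=4)   # last 4 references: b, b, a, d
print(sorted(ws))                     # ['a', 'b', 'd']
```

If fewer than `len(ws)` frames are granted to the process, every pass over this window must fault on some page, which is the condition Denning identified.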
Consideration should be given to sequences of references and to the effect that the presence or absence of repetitive referencing has upon an LRU-organized buffer. For example, a string of single references by one process, requested at a rapid rate, will tend to cause its pages to go to the top of the stack and flush pages out of the buffer.
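The flushing effect described above can be demonstrated with a shared LRU pool: a rapid one-shot scan by one process pushes another process's looped-over pages out of the buffer, so the loop faults again on its next pass. The process names, page numbers, and `run` helper are all invented for the sketch.

```python
# Sketch: process A loops over a small working set; process B performs a
# fast sequential scan of pages it never re-references. Sharing one LRU
# pool lets B's scan flush A's pages and inflate the fault count.
from collections import OrderedDict

def run(trace, capacity):
    pool, faults = OrderedDict(), 0
    for owner, page in trace:
        key = (owner, page)
        if key in pool:
            pool.move_to_end(key)              # promote to MRU
        else:
            faults += 1
            if len(pool) >= capacity:
                pool.popitem(last=False)       # evict least recently used
            pool[key] = True
    return faults

a_loop = [("A", p) for p in [0, 1, 2] * 2]     # A loops over pages 0-2 twice
b_scan = [("B", p) for p in range(10, 16)]     # B scans pages 10-15 once

# B's scan interleaved between A's two passes flushes A's pages:
print(run(a_loop[:3] + b_scan + a_loop[3:], capacity=4))  # 12 faults
# Without the scan, A's second pass hits entirely in the buffer:
print(run(a_loop, capacity=4))                            # 3 faults
```

B's single-use pages climb to the top of the stack on first touch, exactly as the paragraph describes, even though they will never be referenced again.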

REFERENCES:
patent: 235806 (1981-02-01), Mattson et al.
patent: 3806883 (1974-04-01), Weisbecker
patent: 3958228 (1976-05-01), Coombes et al.
patent: 4035778 (1977-07-01), Ghanem
patent: 4059850 (1977-11-01), Van Eck et al.
patent: 4168541 (1979-09-01), De Karske
Lang, et al., "Data Base Buffer Paging in Virtual Storage Systems", ACM Transactions on Data Base Systems, Dec. 1977, pp. 339-351.
Selinger, et al., "Access Path Selection in a Relational Data Base", Proc. 1979, Sigmod Conf. of ACM, pp. 22-34.
IBM General Information and Concepts and Installation Manuals, GH24-5012 and GH24-5013, Jan. 1981.
Denning, "The Working Set Model for Program Behavior", Communications of the ACM, vol. 11, May 1968, pp. 323-333.
Shaw, "The Logical Design of Operating Systems", 1974, pp. 138-144.
Coffman, "Operating Systems Theory", Prentice-Hall, pp. 298-299, 1973.
