Page memory management in non time critical data buffering...

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Reexamination Certificate


Details

Class: C711S170000
Type: Reexamination Certificate
Status: active
Patent number: 06366996

ABSTRACT:

TECHNICAL FIELD
The present invention is directed to memory management in environments which do not require free memory pages to be located and allocated within the smallest possible number of hardware clock cycles, and do not require tasks to have access to more than a very few pages at one time. This facilitates memory management using an address table containing the smallest possible number of bits (i.e. N bits for a memory divided into N pages).
BACKGROUND
Memory management schemes typically divide a limited memory resource into small pieces, called “pages”, which are shared by independent “tasks”. Individual pages are dynamically assigned on demand for use by specific tasks. If a task no longer requires a particular page the task dynamically releases that page, freeing the page for use by other tasks. It is necessary to keep track of those pages which are free and those pages which are in use. It is also necessary to efficiently implement such schemes in hardware.
Prior art page memory management schemes commonly use “direct map”, “partially associative lookup”, or “fully associative lookup” page address translation methods. Conventionally, a “list”, “stack”, “table” or “collection” of addresses of available pages is kept. The collection is dynamically updated to provide a continuously current indication of the addresses of both free pages and in-use pages.
For example, in the direct map method, the collection consists of a series of two-part logical addresses, as shown in FIG. 1. The upper (left-most) part of each address is a “Logical Page Number” (LPN) specifying a particular Physical Page Number (PPN), and the lower (right-most) part is an “Index” specifying a location within that page. In order to “bind” an LPN to a particular PPN, a translation memory (termed “Page Lookup” in FIG. 1) is used. Typically, if the available memory consists of a total of N pages, the Page Lookup translation memory contains exactly N entries for a one-to-one mapping between LPNs and PPNs, although it is possible to have more than N entries in a Page Lookup memory (e.g. to permit a larger address to be redundantly mapped into N pages). Note that this has nothing to do with the concept of “virtual memory”, in which a much larger logical address can be bound to N pages. The present invention can, however, be generalized to encompass virtual memory management schemes.
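A minimal software model of this direct-map translation might look as follows. The widths (an 8-bit index, 16 pages) and the initial identity mapping are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the two-part "direct map" translation described
# above. PAGE_BITS, N_PAGES and page_lookup are illustrative names.

PAGE_BITS = 8        # index width: each page holds 2**8 = 256 locations
N_PAGES = 16         # memory divided into N = 16 pages

# Page Lookup memory: one entry per logical page, mapping LPN -> PPN.
page_lookup = list(range(N_PAGES))   # identity mapping for illustration
page_lookup[3] = 7                   # bind logical page 3 to physical page 7

def translate(logical_addr):
    """Split a logical address into (LPN, Index) and rebind the page."""
    lpn = logical_addr >> PAGE_BITS                 # upper part: LPN
    index = logical_addr & ((1 << PAGE_BITS) - 1)   # lower part: Index
    ppn = page_lookup[lpn]                          # one table read binds LPN to PPN
    return (ppn << PAGE_BITS) | index               # physical address
```

A single table read thus performs the binding; for N pages the table needs N entries of log2(N) bits each.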
The “partially associative lookup” method recognizes that if the logical addresses are large enough, it may be desirable to divide them into three parts rather than two (as is normally done in virtual memory management systems). The third part is called the “segment” portion of the address. This complicates the FIG. 1 Page Lookup memory, because it must be extended to store the correct physical segment number corresponding to each PPN. This scheme implies that there are more logical pages than physical pages. For faster address translation, the Page Lookup memory can be divided into M smaller blocks, each of which can be accessed in parallel. This allows simultaneous comparison (termed “associative lookup”) of M physical segment values with the logical segment value to find a match. A priority encoder of the type described below can be used to select the correct matching values. Since not all physical segment numbers are simultaneously compared, the scheme is called “partially associative”, and can be faster than purely direct mapped techniques. However, the present invention is directed to situations in which speed is not necessarily of the essence, so the complications of three part logical addresses and associative lookup techniques can be avoided.
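For context, a partially associative lookup of the kind described above might be modeled as follows; the block contents, widths, and names are invented for illustration only:

```python
# Illustrative sketch (not the patent's method) of a partially associative
# lookup: the Page Lookup memory is split into M blocks read in parallel,
# each block's stored segment is compared with the logical segment, and a
# priority encoder selects the lowest-numbered matching block.

M = 4  # number of blocks compared in parallel (assumed value)

# Each block stores, per logical page offset: (physical segment, PPN).
blocks = [
    {0: (0x2, 10)},   # block 0
    {0: (0x7, 11)},   # block 1: segment 0x7 maps to physical page 11
    {0: (0x7, 12)},   # block 2 also holds segment 0x7
    {0: (0x1, 13)},   # block 3
]

def lookup(segment, lpn_offset):
    """Compare all M stored segments at once; first match wins."""
    matches = [blk[lpn_offset][0] == segment for blk in blocks]  # M comparators
    for i, hit in enumerate(matches):   # priority encoder: lowest index wins
        if hit:
            return blocks[i][lpn_offset][1]
    return None                         # no matching segment: miss
```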
The “fully associative lookup” method extends the partially associative lookup method to perform a simultaneous comparison of the combined segment and LPN portions of an address. Instead of involving a table lookup operation, this employs a fully associative or “content addressable” address translation method. It is not necessary to further explain the complexities of this technique, because they too can be avoided by employing the present invention.
A primary disadvantage of prior art page address translation schemes is that the collection of addresses of available pages consumes much more memory than is absolutely necessary. This precludes use of prior art memory management schemes in applications in which memory is a scarce resource, such as applications embedded on a single integrated circuit chip. For example, if the available memory consists of a total of N pages, then an address table comprising N*log2(N) bits is required to implement page address translation via direct lookup in an address table. Implementation of page address translation via a free page list or stack also requires an address table comprising N*log2(N) bits (i.e. N entries each having a width of log2(N) bits).
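As a rough size comparison (a hypothetical calculation, with N = 1024 chosen only for illustration):

```python
import math

N = 1024  # example: memory divided into 1024 pages

direct_table_bits = N * int(math.log2(N))  # N entries of log2(N) bits each
uf_table_bits = N                          # one used/free bit per page

print(direct_table_bits)  # 10240 bits for a conventional address table
print(uf_table_bits)      # 1024 bits for an N-bit table
```

At this size the conventional table is ten times larger than a table holding a single bit per page, and the gap widens as N grows.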
A primary objective of prior art schemes is to provide rapid access to the collection to facilitate location and allocation of the next available free page as quickly as possible (e.g. within a few hardware clock cycles). However, it is not always necessary to allocate new pages within the smallest possible number of hardware clock cycles. Furthermore, some applications need not “see” (or access) more than a few pages (possibly as few as one or two pages) at a time. By exploiting these factors, the present invention facilitates management of a collection of pages using an address table containing the smallest possible number of bits (i.e. N bits for a memory divided into N pages). The invention also provides a method of accessing a small number of pages via simple multiplexed address selection. The invention is well suited to use in embedded chip applications which pass data through a set of data storage buffers, as is common in a packet switched network environment.
SUMMARY OF INVENTION
The invention provides a method of finding, within K clock cycles, the next free page contained within an N-page memory. An N-bit UF Table is provided, in which each bit is either zero, indicating that the corresponding memory page is in use, or non-zero, indicating that the corresponding page is free for use. The UF Table is divided into K separate entries, each containing M=N/K bits. Upon receipt of a request for the next free memory page, the UF Table bits are sequentially examined to detect the first non-zero bit. If a non-zero bit is detected, that bit's position within the UF Table is returned as an output value; if no non-zero bit is detected, an error indication is returned as the output value.
After each UF Table bit is examined, a counter value is incremented by one. The output value is obtained by concatenating the counter value with a bit position value representative of the position of the detected bit within the UF Table entry corresponding to the counter value.
The next free page is allocated for use by overwriting the detected bit with a value of zero to indicate that the next free page is in use.
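The search and allocation steps above can be sketched in software as follows. This is an illustrative model rather than the patented hardware; the MSB-first bit ordering within an entry and the names (uf_table, next_free_page, allocate) are assumptions:

```python
# Sketch of the UF Table search: an N-bit table split into K entries of
# M = N/K bits; each step (one clock cycle in hardware) examines one entry,
# and the first non-zero bit found identifies the next free page.

N, K = 32, 4          # 32 pages, searched in at most K = 4 steps
M = N // K            # 8 used/free bits per UF Table entry

# UF Table: 1 = page free, 0 = page in use (entry 0 is fully in use here).
uf_table = [0b00000000, 0b00100000, 0b11111111, 0b11111111]

def next_free_page():
    """Return the number of the first free page, or None as an error."""
    for counter in range(K):             # counter increments per entry examined
        entry = uf_table[counter]
        if entry:                        # any non-zero bit means a free page
            for bit_pos in range(M):     # detect first set bit (MSB first)
                if entry & (1 << (M - 1 - bit_pos)):
                    # output = counter value concatenated with bit position
                    return counter * M + bit_pos
    return None                          # no non-zero bit: error indication

def allocate(page):
    """Mark the page in use by overwriting its UF bit with zero."""
    counter, bit_pos = divmod(page, M)
    uf_table[counter] &= ~(1 << (M - 1 - bit_pos))
```

Because counter and bit position are simply concatenated, no per-page address needs to be stored; the page number falls out of the bit's location in the table.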
An address specifier is provided to identify a plurality of addressable memory locations. The address specifier includes a page specifier for specifying one page within a contiguous group of the in use pages. This facilitates switching from an addressable location on one page within the contiguous group to an addressable location on another page within the contiguous group by incrementing the page specifier.
The invention further provides a method of deriving a physical memory address from an address specifier which identifies two or more of the physical memory addresses. The address specifier comprises a lower order bit index, one or more overflow bits in the bit positions next most significant to the lower order bit index, and two or more page specifiers. The two or more page specifiers are multiplexed to select the one which corresponds to the value of the overflow bits. The selected page specifier is then concatenated with the lower order bit index to produce the physical memory address.
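Under one illustrative layout (two page specifiers, a single overflow bit, and an 8-bit index, all assumed rather than specified here), the derivation can be sketched as:

```python
# Hypothetical sketch of the multiplexed address derivation: the overflow
# bits select one of the page specifiers, and the selected specifier is
# concatenated with the lower-order index.

INDEX_BITS = 8
OVERFLOW_BITS = 1          # selects among 2**1 = 2 page specifiers

def physical_address(page_specifiers, overflow, index):
    """Mux the page specifier chosen by the overflow bits, then
    concatenate it with the lower-order index."""
    selected = page_specifiers[overflow]    # multiplexed selection
    return (selected << INDEX_BITS) | index

# Crossing from page 5 into page 9 as the index overflows:
addr = physical_address([5, 9], overflow=1, index=0x03)
```

In this model, moving from one page of a contiguous group to the next requires only that the overflow bits change value; no address recomputation is needed.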


