Mechanism for programmable modification of memory mapping...
Electrical computers and digital processing systems: memory – Address formation – Address mapping
Reexamination Certificate
2001-07-13
2003-02-18
Yoo, Don Hyun (Department: 2187)
C711S202000, C711S203000, C711S207000, C711S170000, C711S173000
active
06523104
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates in general to the field of memory management within a computing system, and more particularly to an apparatus and method that extends the capabilities of a software-controlled memory management unit such that the sizes of virtual memory pages can be programmed beyond that provided for under legacy operating system software, while at the same time preserving compatibility of the memory management unit with the legacy operating system software.
2. Description of the Related Art
Virtual memory management techniques were developed during the mid-1970s specifically to address a number of problems experienced in early computing systems related to the execution of programs from memory and the storage of data associated with program execution. Virtual memory management is typically accomplished by providing a memory management unit (MMU) within a central processing unit (CPU) that serves as an intermediary between address generation logic in the CPU and memory access logic. Under a virtual memory management scheme, application program instructions cause virtual addresses to be generated by the address logic. The MMU then translates the virtual addresses into physical addresses according to a predefined and configurable memory mapping strategy. The physical addresses are used by the access logic in the CPU to access locations in system memory. Virtual memory management techniques enable the operating system of a computing system to effectively control where application programs are loaded and executed from memory, in addition to providing a means whereby memory can be allocated to a program while it is executing and then released back into the memory pool when the memory is no longer required.
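To make this division of labor concrete, the sketch below models the flow in C. The functions mmu_translate and memory_read are hypothetical stand-ins for the MMU and memory-access logic described above, not functions named by the patent.

```c
#include <stdint.h>

/* Hypothetical stand-ins for the hardware blocks described above. */
extern uint32_t mmu_translate(uint32_t vaddr);   /* MMU: virtual -> physical address    */
extern uint32_t memory_read(uint32_t paddr);     /* memory access logic: physical fetch */

/* The address logic produces a virtual address on behalf of the program;
 * the MMU translates it according to the configured mapping strategy; the
 * access logic then reads system memory at the resulting physical address. */
uint32_t load_word(uint32_t vaddr)
{
    uint32_t paddr = mmu_translate(vaddr);
    return memory_read(paddr);
}
```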
Almost all present-day virtual memory techniques divide a CPU's address space into equal-sized blocks called memory pages. Allocating memory to programs in these equal-sized memory pages minimizes fragmentation effects and decreases the number of virtual address bits that must be translated. Accessing a memory page requires translation of only the upper bits of a virtual address; the lower bits are not translated and merely provide an offset into the page. The virtual-to-physical address mapping information, along with other information specifying the attributes (e.g., access protection features) of memory pages, is stored in a designated area of memory known as a page table. To preclude a page table access each time address translation is required, frequently used page table entries are stored within the MMU in a fast cache known as a translation lookaside buffer (TLB).
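A minimal sketch of this split follows, assuming 4 KB pages (so the low 12 bits pass through untranslated) and a tiny TLB backed by a page-table walk on a miss. The field names, the linear TLB search, and the trivial replacement policy are simplifications for illustration, not the organization claimed by the patent.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT  12u                          /* 4 KB page => 12 untranslated offset bits */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1u)
#define TLB_ENTRIES 16u

typedef struct {
    uint32_t vpn;       /* virtual page number (upper address bits) */
    uint32_t pfn;       /* physical frame number                    */
    uint8_t  attrs;     /* e.g. access-protection bits for the page */
    bool     valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Assumed to exist elsewhere: walks the in-memory page table. */
extern bool page_table_lookup(uint32_t vpn, uint32_t *pfn, uint8_t *attrs);

/* Translate using the TLB first; fall back to the page table on a miss
 * and cache the result so the next access to the same page is fast.
 * (Checking of the protection attributes is omitted for brevity.)     */
bool translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t vpn = vaddr >> PAGE_SHIFT;

    for (uint32_t i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *paddr = (tlb[i].pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
            return true;
        }
    }

    uint32_t pfn;
    uint8_t  attrs;
    if (!page_table_lookup(vpn, &pfn, &attrs))
        return false;                            /* would raise a page fault */

    uint32_t slot = vpn % TLB_ENTRIES;           /* trivial replacement policy */
    tlb[slot] = (tlb_entry_t){ .vpn = vpn, .pfn = pfn, .attrs = attrs, .valid = true };

    *paddr = (pfn << PAGE_SHIFT) | (vaddr & OFFSET_MASK);
    return true;
}
```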
Since each generated address in a virtual memory scheme must be translated, TLBs and associated translation logic are critical timing elements in the execution path of a CPU. Accordingly, address translation elements are designed to be extremely fast and efficient, providing only those functions that are essential to the translation of addresses and specification of corresponding physical memory page attributes.
Virtual memory management techniques are extremely powerful, but because of the address translation overhead imposed on every generated virtual address, they have not migrated into application areas comprising a few relatively small embedded application programs executing on a CPU. Hence, present-day MMU/TLB designs provide for memory page sizes that are commensurate with numerous medium-to-large application programs executing, say, on a desktop system or workstation. It is quite uncommon today to find page sizes in a virtual memory system smaller than 4 KB.
Recent advances in device scaling and fabrication, however, are now enabling manufacturers to provide CPU designs that can absorb the timing and area overhead associated with address translation, thus opening up virtual memory management as an option for the embedded processing world. Yet, while virtual memory management will provide advantages to embedded applications, it is well known that the size, cost, and power constraints generally imposed on embedded processing systems result in designs whose memory use must be controlled more stringently than that of their larger counterparts.
Accordingly, there is a need in the art for virtual memory management techniques and methods that provide for page sizes smaller than 4 KB.
In addition, to retain existing customer bases for current virtual memory products, there is a need for such improved virtual memory management products to preserve compatibility with legacy memory management software.
SUMMARY OF THE INVENTION
The present invention provides a superior technique for extending the capabilities of existing memory management systems, while at the same time retaining compatibility of these systems with operating system software that implements legacy memory management protocols.
In one embodiment, an apparatus is provided that gives system designers programmable minimum memory page sizes. The apparatus has a pagegrain register and a memory management unit (MMU). The pagegrain register prescribes a minimum page size: by default, according to a legacy memory management protocol; alternatively, as one of the programmable minimum memory page sizes according to an extended memory management protocol. The MMU is coupled to the pagegrain register and stores a plurality of page table entries (PTEs). Each of the PTEs specifies a page granularity for a corresponding physical memory page, where the page granularity is bounded by the minimum page size. The MMU has page granularity logic that is configured to determine a page size for the corresponding physical memory page, based on the minimum page size and the page granularity.
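A rough sketch of how such page granularity logic might combine its two inputs is given below. The register layout, the log2-of-page-size encoding, and the clamp-based combination are assumptions made for illustration, not the encoding defined by the patent or by the MIPS architecture.

```c
#include <stdint.h>

/* Assumed encodings, for illustration only:
 *  - pagegrain holds log2 of the minimum page size (e.g. 12 for 4 KB under
 *    the legacy protocol, or a smaller value under the extended protocol);
 *  - each PTE carries a granularity field, also as log2 of a page size,
 *    which is bounded below by the pagegrain minimum.                    */
#define LEGACY_MIN_SHIFT 12u          /* legacy protocol: 4 KB minimum */

typedef struct {
    uint32_t pfn;                     /* physical frame number             */
    uint8_t  grain_shift;             /* per-page granularity, log2(bytes) */
} pte_t;

static uint8_t pagegrain = LEGACY_MIN_SHIFT;   /* default preserves legacy behavior */

/* Program the minimum page size (extended protocol); callers pass log2(bytes). */
void set_pagegrain(uint8_t min_shift)
{
    pagegrain = min_shift;
}

/* Page granularity logic: the effective page size is the PTE's granularity,
 * clamped so it never falls below the programmed minimum.                 */
uint32_t effective_page_size(const pte_t *pte)
{
    uint8_t shift = (pte->grain_shift < pagegrain) ? pagegrain : pte->grain_shift;
    return 1u << shift;
}
```

Under these assumptions, leaving the register at its legacy default keeps every page bounded at 4 KB, which is how unmodified legacy memory management software continues to see the behavior it expects.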
One aspect of the present invention features a computer program product for use with a computing device. The computer program product includes a computer usable medium, having computer readable program code embodied in the medium, for causing a CPU to be described, the CPU being capable of sizing memory pages according to a legacy memory management technique or an extended memory management technique. The computer readable program code includes first program code, second program code, and third program code. The first program code describes a memory management unit (MMU) that stores page table entries (PTEs). Each of the PTEs prescribes a page granularity for a corresponding physical memory page, where the page granularity is bounded by a minimum page size. The second program code describes page granularity logic within the MMU that establishes a page size for the corresponding physical memory page as a function of the minimum page size and the page granularity. The third program code describes a pagegrain register that is coupled to the MMU. The pagegrain register specifies the minimum page size: by default, according to the legacy memory management technique; alternatively, according to the extended memory management technique.
Another aspect of the present invention contemplates a computer data signal embodied in a transmission medium. The computer data signal has first computer-readable program code, second computer-readable program code, and third computer-readable program code. The first computer-readable program code describes a memory management unit (MMU) that stores page table entries (PTEs). Each of the PTEs specifies a page granularity for a corresponding physical memory page, where the page granularity is bounded by a minimum page size. The second computer-readable program code describes page granularity logic within the MMU that determines a page size for the corresponding physical memory page based on the minimum page size and the page granularity. The third computer-readable program code describes a pagegrain register that is coupled to the MMU and that specifies the minimum page size: by default, according to the legacy memory management technique; alternatively, according to the extended memory management technique.
Huffman James W.
Huffman Richard K.
MIPS Technologies Inc.
Namazi Mehdi
Yoo Don Hyun