Methods and systems for extending an application's...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate


C711S118000, C711S133000, C711S136000, C711S141000, C711S142000, C711S143000


active

06832295


FIELD OF THE INVENTION
The present invention relates to methods and systems for managing virtual memory remapping. More particularly, the invention is directed to efficient methods and systems for managing the extension of an application's address space by remapping virtual memory. The invention also relates to methods and systems for flushing memory caches in a multi-processor environment.
BACKGROUND OF THE INVENTION
Virtual memory management techniques are well known in the art. Memory is primarily made up of fast local memory (e.g., Random Access Memory “RAM”) and slower external memory (e.g., magnetic or optical disks). Local memory may be further divided into very high speed memory, usually embodied in small cache memory, and somewhat slower main memory. The available size of external memory is limited only by the capacities of the disks present on the system, while the size of local memory is limited by the addressing capabilities of the processor.
Modern processor architectures typically provide virtual memory functionality. Virtual memory gives an application the illusion of having a very large, linear address space, significantly reducing the complexity of application memory management. At the same time, the operating system, which has the responsibility for managing virtual memory mappings, has the discretion to remove memory from the application's address space when it determines that the memory may be better used elsewhere in the system. The combination of ease of use for application programming and flexibility in resource allocation in the operating system has proven to be powerful, leading to the popular adoption of virtual memory systems.
A virtual memory manager in an operating system determines what virtual memory is mapped to physical memory and what virtual memory will be unmapped from the faster physical memory and stored in slower external memory.
The operating system initializes the memory management data structures which will be used by the processor and operating system to translate virtual addresses into corresponding physical or external addresses. A virtual memory address is typically converted into a physical memory address by taking the high order bits of the virtual memory address to derive the virtual page number. To determine the physical page which maps a virtual page, the virtual page number is used as an index into a table (often called the pagetable), which specifies, among other things, the physical page number mapped by that virtual page mapping. The physical page number is then concatenated with the low order bits of the virtual memory address—the byte offset within the page—to produce the complete address in physical memory corresponding to the original virtual address. The processor performs this virtual-to-physical address translation process during program execution. The operating system manages the pagetable, modifying it as appropriate, to indicate to the processor which virtual pages map to which physical pages. The processor will then reference the physical memory addresses thus produced, to reference program code or data.
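The translation steps described above can be sketched in C. This is an illustrative single-level pagetable with 4 KB pages; real pagetables are multi-level and their entries also carry protection and validity bits, and the sample mapping (VPN 3 to PPN 0x42) is hypothetical.

```c
#include <stdint.h>

#define PAGE_SHIFT  12                      /* 4 KB pages */
#define PAGE_SIZE   (1u << PAGE_SHIFT)
#define OFFSET_MASK (PAGE_SIZE - 1)

/* Toy flat pagetable: pagetable[vpn] holds the physical page number
   mapped by that virtual page. Example mapping: VPN 3 -> PPN 0x42. */
static uint32_t pagetable[16] = { [3] = 0x42 };

uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;  /* high-order bits: virtual page number */
    uint32_t offset = vaddr & OFFSET_MASK;  /* low-order bits: byte offset in page */
    uint32_t ppn    = pagetable[vpn];       /* pagetable lookup */
    return (ppn << PAGE_SHIFT) | offset;    /* concatenate PPN with offset */
}
```

Translating virtual address `(3 << 12) | 0x10` walks exactly the steps in the paragraph: extract VPN 3, look up PPN 0x42, and append the 0x10 offset.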
Typical 32-bit processors have 32-bit physical addressing capabilities. This implies that such a 32-bit processor may address up to 2^32 bytes of physical memory, or 4 GB. Similarly, 32-bit processors typically support 32-bit virtual addressing, yielding 4 GB virtual address spaces. Operating systems supporting virtual memory typically reserve between ¼ and ½ of the total virtual address space provided by any given processor architecture for storage of system-wide operating system data, such as device data structures and the file system cache. The remainder of the virtual address space may be used for application virtual memory. On a typical 32-bit processor with 32-bit virtual addresses, an application thus has access to roughly 2 to 3 GB of virtual address space for application use, including buffer space, application code, heap space, stack space, per-process control data, and the like.
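The arithmetic above can be checked with a short C sketch; the function names are illustrative, not part of any real API.

```c
#include <stdint.h>

#define GB (1ull << 30)

/* A processor with addr_bits of addressing reaches 2^addr_bits bytes. */
uint64_t address_space_bytes(unsigned addr_bits)
{
    return 1ull << addr_bits;
}

/* What remains for the application after the OS reserves its share. */
uint64_t app_space_bytes(unsigned addr_bits, uint64_t os_reserved)
{
    return address_space_bytes(addr_bits) - os_reserved;
}
```

With 32-bit addressing the total is 4 GB; reserving ¼ of it leaves 3 GB for the application, and reserving ½ leaves 2 GB, matching the "roughly 2 to 3 GB" figure.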
Server applications such as database servers or mail servers typically require large virtual address spaces to support high throughput rates with large numbers of connected clients. These address spaces may contain caches of user data buffers, allowing the application to increase throughput by performing operations in main memory rather than through slower external disk I/O. Once these applications have fully utilized the 2 to 3 GB of application virtual address space, further gains in throughput will ordinarily be impossible, since additional memory must be stored on disk rather than in main memory. However, depending on operating system activity, physical memory may be available in abundance—that is, there may be significant amounts of physical memory in the system which is not mapped into any address space, or is lightly used, and thus is available for use elsewhere.
But since the application's address space is fully consumed, there is no place in which to effectively use this memory to benefit the server application. The net effect is that application throughput is bottlenecked by a lack of accessible application memory space. What is needed, therefore, is a mechanism that allows an application that has exhausted its virtual address space to allocate and access a large additional tier of main memory, even if such access is somewhat more expensive than standard application-mapped virtual memory.
Computations associated with translating a virtual address into a physical address, internal to the processor's execution engine, can be expensive, due to the multiple memory references involved and the associated logical manipulations. To reduce this overhead, processors usually have translation look-aside buffers (“TLBs”). TLBs are small caches made of fast, associative memory, internal to the processor. TLBs maintain a list of the most recently used virtual memory addresses along with their corresponding physical memory addresses. When the operating system changes the mapping of a virtual page by modifying the pagetable which stores the virtual-to-physical memory address translation data, the operating system must notify the processor(s) to flush the old virtual-to-physical memory mapping, if it exists, from the TLB.
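The TLB behavior just described, caching recent translations and requiring an explicit flush when the operating system changes a mapping, can be illustrated with a toy direct-mapped TLB in C. The structure, size, and function names here are all illustrative; hardware TLBs are associative and managed by the processor.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 8

/* One cached translation: virtual page number -> physical page number. */
struct tlb_entry { uint32_t vpn; uint32_t ppn; bool valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Fast path: return the cached translation, if present. */
bool tlb_lookup(uint32_t vpn, uint32_t *ppn)
{
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
    if (e->valid && e->vpn == vpn) { *ppn = e->ppn; return true; }  /* hit */
    return false;                                                   /* miss: walk the pagetable */
}

/* Cache a translation after a pagetable walk. */
void tlb_fill(uint32_t vpn, uint32_t ppn)
{
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
    e->vpn = vpn; e->ppn = ppn; e->valid = true;
}

/* The OS must invoke a flush like this after modifying the pagetable
   entry for vpn; otherwise the stale translation would keep being used. */
void tlb_flush_entry(uint32_t vpn)
{
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
    if (e->valid && e->vpn == vpn) e->valid = false;
}
```

The flush step is exactly what the following paragraphs are concerned with: skipping it leaves a stale virtual-to-physical mapping visible to the processor.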
If the physical memory address mapped to a virtual memory address is modified without the TLB being flushed, then an incorrect or invalid memory location may be accessed by an application, potentially resulting in data corruption or application failure. It is, therefore, critical that the operating system's virtual memory manager flush stale TLB entries with certainty, to protect data integrity.
In a multi-processor (“MP”) computer system, it is not just the TLB in the local processor that must be updated when a virtual-to-physical mapping changes. In fact, all processors that are executing code in the subject address space (referencing in any way the soon-to-be-modified pagetable) must have their local TLBs flushed. One standard technique for effecting this update is to send an interprocessor interrupt to each affected processor in the system. While a target processor handles the interrupt, other, lower-priority processor activities are blocked for many processor cycles as the target processor saves its state, dispatches the interrupt, flushes its TLB, acknowledges that the TLB has been flushed, restores its pre-interrupt state, and then continues with its previous work.
Meanwhile, the processor that initiated the change to the virtual memory mapping performs a busy-wait operation (doing no application work), waiting for the other processors that must flush their TLBs to acknowledge that the flush is complete. The entire TLB flush operation, counting all the cycles on all processors involved and the bus cycles spent communicating between processors, can be very expensive. A TLB flush operation is pure overhead, since the application makes no forward progress for the duration.
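The shootdown handshake described above, initiator busy-waiting while each target flushes and acknowledges, can be sketched with threads standing in for IPI targets. This is purely illustrative: real kernels use interprocessor interrupts and privileged flush instructions, not threads, and every name here is hypothetical.

```c
#include <pthread.h>
#include <stdatomic.h>

#define NCPUS 4

static atomic_int pending_acks;

/* Stand-in for the privileged TLB-flush instruction on one processor. */
static void flush_local_tlb(long cpu)
{
    (void)cpu;
}

/* What each "IPI target" does: flush its local TLB, then acknowledge. */
static void *ipi_handler(void *arg)
{
    flush_local_tlb((long)arg);
    atomic_fetch_sub(&pending_acks, 1);     /* acknowledge completion */
    return NULL;
}

/* The initiator: "send" an IPI to every processor, then busy-wait
   until all of them have acknowledged their TLB flush. */
int tlb_shootdown(void)
{
    pthread_t cpus[NCPUS];
    atomic_store(&pending_acks, NCPUS);
    for (long i = 0; i < NCPUS; i++)
        if (pthread_create(&cpus[i], NULL, ipi_handler, (void *)i) != 0)
            return -1;
    while (atomic_load(&pending_acks) != 0)
        ;                                   /* busy-wait: no application work */
    for (int i = 0; i < NCPUS; i++)
        pthread_join(cpus[i], NULL);
    return 0;
}
```

The busy-wait loop is where the initiating processor burns cycles without making forward progress, which is precisely the overhead the invention seeks to reduce.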
