System for efficiently maintaining translation lookaside...

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Reexamination Certificate


Details

Classifications: C711S202000, C711S206000, C711S208000
Type: Reexamination Certificate
Status: active
Patent number: 06490671

ABSTRACT:

BACKGROUND OF THE INVENTION
Modern large computer systems contain memory management hardware and addressing systems to provide software processes (“processes”) with access to a defined range of physical memory, e.g., hardware volatile memory. Operating systems within such computer systems provide data structures which define and maintain virtual memory spaces which can be private to each process. A process's virtual memory space is then used transparently by the process to access data that is actually maintained within the physical memory space.
Such data structures provide a mapping between the addresses of the virtual memory space used by the process and the addresses of the physical memory space used by the computer system's hardware memory management system which actually maintains the data in the physical memory. A typical such computer system can provide memory mappings for as many as 500 or more processes simultaneously.
One such data structure is a page table, which is itself maintained in physical memory. A disadvantage of keeping the page table in physical memory is that every access to it consumes costly bus bandwidth.
To avoid these costly page table accesses, modern CPUs typically have local translation lookaside buffers, or TLBs. A TLB is a relatively small, high-speed memory cache which stores virtual-to-physical address mappings close to the CPU. After a mapping is found in the page table, it is copied into the TLB so that future accesses to the same virtual address do not require a page table lookup.
In a multi-processor machine, a process may be divided into threads of execution, with some threads executing on different CPUs. All of a process's threads share a common virtual address space; each CPU, however, maintains its own TLB. When any of the CPUs in the machine invalidates a TLB entry, each CPU is notified, traditionally by means of a hardware interrupt, that there has been a change and refreshes its own TLB. Invalidating TLB entries is therefore very expensive, because all of the CPUs on the machine stop their processing to perform the refresh.
SUMMARY
The cost of invalidating TLB entries in a multiprocessor system can be minimized by processing them in batches. Mappings invalidated by any of the processors are marked as dirty and tracked by a driver process. Once the number of dirty mappings exceeds a certain predetermined threshold, the driver batches the mappings together and passes the list of dirty mappings to all of the processors. If a dispatch-level routine, which has higher execution priority than ordinary user routines, is used to notify the processors, it is scheduled to execute on all of the processors immediately. On each processor, the TLB entries corresponding to the batched dirty mappings are identified and invalidated.
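As a rough picture of the per-processor half of this scheme, the following user-space C sketch drops every TLB entry, on every simulated CPU, whose virtual address appears in a batch of dirty mappings. The names and structures (tlb_entry, cpu_tlb, invalidate_batch, NUM_CPUS) are illustrative assumptions, not taken from the patent, and the plain loop stands in for the dispatch-level routine that would run on each processor.

/* Illustrative user-space sketch of batched TLB invalidation.
 * All names here are hypothetical, not taken from the patent. */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_CPUS 4
#define TLB_SIZE 8

typedef struct {
    unsigned long vaddr;   /* virtual page address   */
    unsigned long paddr;   /* physical page address  */
    bool          valid;   /* entry currently usable */
} tlb_entry;

static tlb_entry cpu_tlb[NUM_CPUS][TLB_SIZE];   /* one TLB per CPU */

/* Invalidate, on every CPU, each TLB entry whose virtual address
 * appears in the batch of dirty mappings. */
static void invalidate_batch(const unsigned long *dirty, size_t n_dirty)
{
    for (int cpu = 0; cpu < NUM_CPUS; cpu++) {
        for (int i = 0; i < TLB_SIZE; i++) {
            if (!cpu_tlb[cpu][i].valid)
                continue;
            for (size_t d = 0; d < n_dirty; d++) {
                if (cpu_tlb[cpu][i].vaddr == dirty[d]) {
                    cpu_tlb[cpu][i].valid = false;
                    printf("cpu %d: invalidated vaddr %#lx\n", cpu, dirty[d]);
                    break;
                }
            }
        }
    }
}

int main(void)
{
    /* Seed one mapping per CPU, then batch-invalidate it everywhere. */
    for (int cpu = 0; cpu < NUM_CPUS; cpu++)
        cpu_tlb[cpu][0] = (tlb_entry){ 0x1000, 0x9000, true };

    unsigned long dirty[] = { 0x1000 };
    invalidate_batch(dirty, 1);
    return 0;
}

In a real kernel the inner work would execute at elevated priority on each CPU and issue the architecture's TLB-invalidate instruction rather than clearing a flag in an array.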
By applying database application semantics to memory management, effective and sizable TLB batches can be formed without hindering performance. Because the driver responsible for forming the batch is also the interface that provides memory to the database user processes, the identified TLB entries are efficiently invalidated on an on-demand basis.
Accordingly, a method for maintaining virtual memory consistency in a multi-processor environment comprises allocating a subset of virtual memory to a process, and mapping the subset of virtual memory to a first subset of physical memory. A memory mapping mechanism such as a translation lookaside buffer (TLB) is maintained in each processor, each TLB comprising a plurality of TLB entries. Each TLB entry comprises mapping information with respect to a mapping between a virtual address in the subset of virtual memory and a physical address in the first subset of physical memory. When the subset of virtual memory is to be remapped to a different, or second, subset of physical memory, a reference to the first subset of physical memory is placed into a free list and marked as dirty. When the number of dirty references exceeds a predetermined threshold, the corresponding entries in each processor's TLB are invalidated. Alternatively, all TLB entries can be invalidated.
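The producer side of that method can be pictured with the minimal sketch below, which assumes a fixed-size free list, a dirty counter, and the example threshold of 2000 mentioned later in this summary. The names free_entry, release_mapping, and flush_dirty_mappings are hypothetical, and the flush function merely stands in for running the batched invalidation on every processor.

/* Illustrative sketch of the free-list / threshold side (hypothetical names). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define FREE_LIST_MAX   4096
#define DIRTY_THRESHOLD 2000   /* example threshold taken from the text */

typedef struct {
    unsigned long vaddr;   /* virtual address the page was mapped at  */
    unsigned long paddr;   /* released physical page                  */
    bool          dirty;   /* stale mapping may still sit in some TLB */
} free_entry;

static free_entry free_list[FREE_LIST_MAX];
static size_t     free_count;
static size_t     dirty_count;

/* Placeholder: a real driver would batch the dirty entries and have
 * every processor invalidate the corresponding TLB entries. */
static void flush_dirty_mappings(void)
{
    printf("flushing %zu dirty mappings on all CPUs\n", dirty_count);
    for (size_t i = 0; i < free_count; i++)
        free_list[i].dirty = false;
    dirty_count = 0;
}

/* Called when a virtual range is remapped: the old physical pages go
 * onto the free list, marked dirty because stale TLB entries may still
 * reference them. */
static void release_mapping(unsigned long vaddr, unsigned long paddr)
{
    if (free_count == FREE_LIST_MAX)
        return;                               /* sketch: ignore overflow */
    free_list[free_count++] = (free_entry){ vaddr, paddr, true };
    if (++dirty_count >= DIRTY_THRESHOLD)
        flush_dirty_mappings();
}

int main(void)
{
    /* Remap 2500 pages; the flush fires once the threshold is crossed. */
    for (unsigned long v = 0; v < 2500; v++)
        release_mapping(v * 0x1000, 0x100000 + v * 0x1000);
    return 0;
}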
In accordance with certain embodiments, a subset of virtual memory is allocated to a process and mapped to a first subset of physical memory. Memory mappings that should be invalidated are identified according to some algorithm, such as a Least Recently Used (LRU) algorithm. When the number of identified memory mappings equals or exceeds a predetermined threshold, a list of the identified memory mappings is accessed by each processor, and at each processor, TLB entries corresponding to the identified memory mappings are batched together and invalidated. The list can be maintained by the same driver process which provides memory for database user processes. Each processor is notified, via a dispatch-level routine, for example, to invalidate the batched TLB entries.
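The identification step could, for example, look like the short sketch below, which assumes mappings are tracked with a logical last-use timestamp; pick_lru and the mapping structure are illustrative only and are not data structures named by the patent.

/* Illustrative LRU-style selection of a mapping to retire (hypothetical names). */
#include <stddef.h>
#include <stdio.h>

#define N_MAPPINGS 8

typedef struct {
    unsigned long vaddr;      /* virtual page address             */
    unsigned long last_use;   /* logical timestamp of last access */
} mapping;

/* Return the index of the least recently used mapping; it becomes a
 * candidate for the next invalidation batch. */
static size_t pick_lru(const mapping *m, size_t n)
{
    size_t lru = 0;
    for (size_t i = 1; i < n; i++)
        if (m[i].last_use < m[lru].last_use)
            lru = i;
    return lru;
}

int main(void)
{
    mapping m[N_MAPPINGS];
    for (size_t i = 0; i < N_MAPPINGS; i++)
        m[i] = (mapping){ 0x1000 * (i + 1), 100 + i };
    m[3].last_use = 5;   /* make entry 3 the oldest */

    printf("batch candidate: vaddr %#lx\n", m[pick_lru(m, N_MAPPINGS)].vaddr);
    return 0;
}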
TLB entries can be invalidated at an essentially regular interval, for example, about once every second. This can be accomplished by triggering invalidation of the TLB entries when the number of dirty mappings exceeds a predetermined threshold, such as 2000, where it is known that mappings occur at a reasonably regular rate. The particular threshold, however, depends on the particulars of the computing system.
In accordance with certain embodiments, the system also maintains a free list which includes a plurality of free list entries. Each free list entry includes a reference to virtual memory which is either unmapped or whose mapping is marked as dirty. The free list entries marked as dirty can be tracked, for example, in a hash table. Upon determining that a mapping referenced by a dirty free list entry is needed by a particular processor, the dirty entry can be removed from the free list.
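One way to picture the hash-table tracking and on-demand reclamation is the sketch below. The table layout and the names track_dirty and reclaim are assumptions for illustration (with no collision handling); they are not the patent's implementation.

/* Illustrative tracking of dirty free-list entries in a small hash table
 * keyed by virtual address (hypothetical names, no collision handling). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define TABLE_SIZE 64   /* power of two for cheap masking */

typedef struct {
    unsigned long vaddr;
    bool          in_use;   /* slot holds a dirty free-list entry */
} dirty_slot;

static dirty_slot dirty_table[TABLE_SIZE];

/* Hash on the virtual page number. */
static size_t slot_for(unsigned long vaddr)
{
    return (vaddr >> 12) & (TABLE_SIZE - 1);
}

/* Record a freed-but-dirty mapping. */
static void track_dirty(unsigned long vaddr)
{
    dirty_table[slot_for(vaddr)] = (dirty_slot){ vaddr, true };
}

/* If the mapping is needed again before the next batch flush, pull it
 * out of the dirty set so it is not invalidated. Returns true on success. */
static bool reclaim(unsigned long vaddr)
{
    size_t i = slot_for(vaddr);
    if (dirty_table[i].in_use && dirty_table[i].vaddr == vaddr) {
        dirty_table[i].in_use = false;
        return true;
    }
    return false;
}

int main(void)
{
    track_dirty(0x7000);
    printf("reclaimed: %s\n", reclaim(0x7000) ? "yes" : "no");
    return 0;
}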
The system can further maintain a page table which includes a plurality of page table entries (PTEs), each PTE mapping a virtual address to a physical address. When a process thread executing on a processor accesses a virtual address, the processor first searches its own TLB for the virtual address. If no valid TLB entry holds a mapping for the virtual address, the processor can search the page table. Upon finding a valid mapping in the page table, the processor can copy the mapping into its TLB.
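That lookup path might be sketched as follows, assuming simple array-backed TLB and page-table structures with a naive FIFO fill policy; it models only the software-visible flow, not actual hardware.

/* Illustrative translation path: TLB first, page table on a miss,
 * then cache the result in the TLB (hypothetical structures). */
#include <stdbool.h>
#include <stdio.h>

#define TLB_SIZE 4
#define PT_SIZE  16

typedef struct {
    unsigned long vaddr, paddr;
    bool          valid;
} entry;

static entry tlb[TLB_SIZE];         /* one CPU's TLB        */
static entry page_table[PT_SIZE];   /* shared page table    */
static int   tlb_next;              /* naive FIFO fill slot */

/* Translate vaddr; on success store the physical address in *paddr. */
static bool translate(unsigned long vaddr, unsigned long *paddr)
{
    /* 1. TLB lookup. */
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].vaddr == vaddr) {
            *paddr = tlb[i].paddr;
            return true;
        }
    }
    /* 2. Page-table walk on a TLB miss. */
    for (int i = 0; i < PT_SIZE; i++) {
        if (page_table[i].valid && page_table[i].vaddr == vaddr) {
            /* 3. Copy the mapping into the TLB for future accesses. */
            tlb[tlb_next] = page_table[i];
            tlb_next = (tlb_next + 1) % TLB_SIZE;
            *paddr = page_table[i].paddr;
            return true;
        }
    }
    return false;   /* page-fault path, not modeled here */
}

int main(void)
{
    page_table[0] = (entry){ 0x4000, 0x84000, true };

    unsigned long p;
    if (translate(0x4000, &p))   /* miss: filled from the page table */
        printf("first lookup:  %#lx\n", p);
    if (translate(0x4000, &p))   /* hit in the TLB */
        printf("second lookup: %#lx\n", p);
    return 0;
}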


REFERENCES:
patent: 4779188 (1988-10-01), Gum et al.
patent: 5317705 (1994-05-01), Gannon et al.
patent: 5437017 (1995-07-01), Moore et al.
patent: 5574878 (1996-11-01), Onodera et al.
patent: 5710903 (1998-01-01), Horiuchi et al.
patent: 5790851 (1998-08-01), Frank et al.
patent: 5809522 (1998-09-01), Novak et al.
patent: 5860144 (1999-01-01), Frank et al.
patent: 5906001 (1999-05-01), Wu et al.
patent: 5956754 (1999-09-01), Kimmel
patent: 6105113 (2000-08-01), Schimmel
patent: 6119204 (2000-09-01), Chang et al.
