System and method for translation buffer accommodating...

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Reexamination Certificate (active)

Patent number: 06625715

Classifications: C711S205000, C711S216000


TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to computer systems having virtual memory addressing, and in particular to such computer systems having a translation lookaside buffer (TLB) or similar cache for use with virtual memory addressing.
BACKGROUND OF THE INVENTION
Virtual memory addressing is a common strategy used to permit computer systems to have more addressable memory than the actual physical memory installed within a given computer system. Data is stored on a storage device such as a hard disk drive and is loaded into physical memory as needed, typically on a page-by-page basis, where a memory page is a predetermined amount of contiguous memory. Computer systems having virtual memory addressing must translate a given virtual memory address to a physical memory address that temporarily corresponds to the virtual address.
In many such computer systems, translation is accomplished via a translation lookaside buffer (TLB), also known to those skilled in the art as a TC (translation cache). The TLB is a cache, preferably located near the processor of the computer system to improve access speed, that holds the virtual-page-to-physical-page mappings most recently used by the processor. The TLB entries may be cached entries from a page table or translations created and/or inserted by the operating system. The translation of virtual to physical addresses is commonly a critical path in computer performance. Conventional TLB organizations well known to those skilled in the art include direct mapping, in which an entry can appear in the TLB in only one position; fully associative mapping, in which an entry can be placed anywhere in the TLB; and set-associative mapping, in which an entry can be placed in a restricted set of places in the TLB, where a set is a group of entries in the cache and an entry can be placed anywhere within the set.
Fully associative TLBs conventionally include a Content Addressable Memory (CAM) array and a Random Access Memory (RAM) array. CAM, also known as “associative memory,” is a kind of storage device that includes comparison logic with each bit of storage. A data value is broadcast to all words of storage and compared with the values stored there. Words that match are flagged in some way. Subsequent operations can then work on the flagged words and/or data linked to them, e.g., reading them out one at a time or writing to certain bit positions in all of them.
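The CAM-based lookup described above can be modeled in software. The following is a minimal sketch, not the patent's implementation: the class and field names are illustrative assumptions, and the hardware's parallel broadcast-and-compare is modeled here as a linear scan over the entries.

```python
class CamTlb:
    """Software model of a fully associative TLB (CAM + RAM arrays)."""

    def __init__(self):
        # Each entry pairs a virtual page number (the CAM tag) with a
        # physical page number (the RAM payload) and a valid bit.
        self.entries = []  # list of (virtual_page, physical_page, valid)

    def insert(self, vpage, ppage):
        self.entries.append((vpage, ppage, True))

    def lookup(self, vpage):
        # Hardware broadcasts vpage to every CAM word at once and flags
        # matches; a software model simply scans all entries.
        for tag, ppage, valid in self.entries:
            if valid and tag == vpage:
                return ppage
        return None  # TLB miss
```

A miss (returning `None` here) would trigger a page-table walk by hardware or the operating system in a real system.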
Set-associative TLBs conventionally include decoders, RAM arrays, and comparators. Part of the virtual address is used by the decoder to determine which entries in the RAM array may contain a corresponding physical address translation. The remainder of the virtual address is typically compared by the comparator against a tag stored in the RAM array (each RAM array entry has a corresponding tag) to determine the specific entry to be used for translation. Set-associative TLBs tend to be faster to access than fully associative TLBs due to the use of decoders rather than CAM arrays.
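The decoder-plus-comparator flow above can be sketched as follows. This is a hedged illustration, not the patent's design: the fixed 4K page shift, the four index bits, and the set layout are all assumptions chosen for the example.

```python
def set_assoc_lookup(sets, vaddr, page_shift=12, index_bits=4):
    """Model a set-associative TLB lookup for a fixed 4K page size.

    `sets` is a list of 2**index_bits sets, each a list of
    (tag, physical_page, valid) entries.
    """
    vpn = vaddr >> page_shift                 # virtual page number
    index = vpn & ((1 << index_bits) - 1)     # decoder input: selects one set
    tag = vpn >> index_bits                   # compared against stored tags
    for entry_tag, ppage, valid in sets[index]:
        if valid and entry_tag == tag:
            offset = vaddr & ((1 << page_shift) - 1)
            return (ppage << page_shift) | offset
    return None  # TLB miss
```

Only the selected set's tags are compared, which is why the decoder-based design is faster than broadcasting to a full CAM array.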
Conventional TLBs are designed to work with a fixed page size, such as a 4K (1K = 1024 bytes) page size, a 16K page size, or a 256K page size. This is less than optimal because memory space on conventional personal computers (PCs) is laid out such that different address ranges have differing page granularity requirements. For example, on a PC, the physical memory space between addresses 640K and 1M (1M = 2^20 bytes) needs 4K-8K granularity to support partitions for read-only memories (ROMs), hard disk interfaces, graphics interfaces, etc., but the physical memory space below 640K and above 1M is random-access memory (RAM), which would be more efficiently mapped with larger page sizes.
A conventional solution is to use multiple TLBs in which at least one TLB is implemented for each page size of addressable memory space. For example, one TLB is implemented for memory space that is addressed via 4K page sizes and another TLB is implemented for memory space that is addressed via 16K page sizes. This is problematic because all TLBs must be referenced for each virtual address (slower than referencing a single TLB), the method allows creation of multiple (overlapping) entries representing the same virtual address, and the operating system (OS) is limited to a small set of possible page sizes.
Another conventional solution is to implement one TLB using the smallest page size needed, such as 4K in the above example of a conventional microprocessor. However, this is problematic in that many more entries in the TLB will be needed to describe the portions of memory that are addressed in larger page sizes. For example, eight entries would be needed in a TLB to describe every 32K page of memory if the TLB uses a page size of 4K. If the number of entries in the TLB is increased to accommodate the requirement of more entries, this results in slower performance because searching a larger TLB is slower than searching a smaller TLB. If the number of entries in the TLB is not increased, then the number of “misses” will increase (the case in which a given virtual address has no corresponding entry in the TLB), thus causing hardware or the OS to spend a significant number of cycles retrieving the missing translation before program execution can resume. Because the translation of virtual to physical addresses is a bottleneck in the speed of computers, it is critical that the translation be accomplished quickly.
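The entry blow-up described above is simple arithmetic: a TLB fixed at a small page size needs one entry per small page covering the region. The helper name below is illustrative, not from the patent.

```python
def entries_needed(region_size, tlb_page_size):
    """Number of fixed-size TLB entries needed to map a contiguous region."""
    return region_size // tlb_page_size

# The example from the text: every 32K page needs eight 4K entries.
assert entries_needed(32 * 1024, 4 * 1024) == 8
```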
Therefore, a need exists for a single TLB that can quickly accommodate multiple page sizes.
SUMMARY OF THE INVENTION
The system identifies virtual addresses as including three portions: a virtual fixed page address in the upper bits of the address word that is always used to identify the page; an offset address in the lower bits of the address word that is always used to identify the page offset; and a variable page address, between the virtual fixed page address and the offset, that identifies either page address bits or offset bits, depending on the size of the page corresponding to the virtual address word.
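The three-portion view of the address can be sketched as follows. The bit boundaries are assumptions chosen for illustration (a 4K minimum and 1M maximum page size); the patent does not fix these values here.

```python
MIN_PAGE_SHIFT = 12   # assumed smallest supported page: 4K
MAX_PAGE_SHIFT = 20   # assumed largest supported page: 1M

def split_virtual_address(vaddr):
    """Split a virtual address into its three portions.

    Bits above MAX_PAGE_SHIFT are always page address; bits below
    MIN_PAGE_SHIFT are always offset; the bits in between are the
    variable page address, interpreted as page bits or offset bits
    depending on the actual page size.
    """
    fixed_page = vaddr >> MAX_PAGE_SHIFT
    variable = (vaddr >> MIN_PAGE_SHIFT) & (
        (1 << (MAX_PAGE_SHIFT - MIN_PAGE_SHIFT)) - 1)
    offset = vaddr & ((1 << MIN_PAGE_SHIFT) - 1)
    return fixed_page, variable, offset
```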
In one embodiment of a method of the present invention, the system receives a virtual address and a page size bias for the virtual address and outputs a corresponding physical address. The page size bias is used in the look-up of the physical address. During intermediate stages of the virtual-to-physical address translation, a page size mask and a physical page address are generated according to the look-up of the virtual address and page size bias. The page size mask indicates what portion of the virtual address describes the address of the virtual page in memory space, and what portion of the address represents an offset within the virtual page. Since the physical page size and virtual page size are the same, the page size mask similarly indicates what portion of the generated physical page address describes the translated virtual page address and is to be used as physical address output, and what portion of the physical page address should be masked (because it is not part of the page address) and replaced with the virtual address offset within the page. The final physical address consists of the unmasked portion of the physical page address concatenated with the virtual address offset within the page (the offset within the page is not translated).
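The final composition step described above can be sketched as a pair of bitwise operations. This is a hedged illustration of the masking idea, not the patent's circuit; the function and parameter names are assumptions, and the page size is taken here directly rather than derived from a stored page size mask.

```python
def compose_physical_address(phys_page_addr, vaddr, page_size):
    """Combine a translated physical page address with the untranslated
    offset, using a mask derived from the page size (a power of two).
    """
    offset_mask = page_size - 1    # low bits: offset within the page
    page_mask = ~offset_mask       # high bits: the page address proper
    # Unmasked portion of the physical page address, concatenated with
    # the virtual address's offset within the page (never translated).
    return (phys_page_addr & page_mask) | (vaddr & offset_mask)
```

Because the mask width tracks the page size, the same datapath serves 4K, 16K, or larger pages: a larger page simply leaves more low-order bits of the virtual address untranslated.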
In one embodiment of an apparatus, the present invention generates a set of entry selects according to a virtual address and page size bias supplied, generates a physical page address from an entry selected by the entry selects in a first array, generates a virtual address tag from an entry selected by the entry selects in a first array, generates a page size mask from an entry selected by the entry selects in a first array, and generates a match signal from a comparison of the variable page address supplied with a corresponding entry selected by the entry selects in a second array (the match signal is also qualified with a valid bit contained within
