Translation lookaside buffer for multiple page sizes

Electrical computers and digital processing systems: memory – Storage accessing and control – Specific memory composition

Reexamination Certificate

Details

C711S005000, C711S101000, C365S049130, C365S050000, C365S202000, C365S206000, C365S212000

active

06233652

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to computers and memory devices, and more particularly, to apparatus and methods for translating logical addresses to physical addresses.
2. Description of the Related Art
A computer or processor accesses memory locations to load or store data. To access memory, the processor uses the physical address (PA) of the data in the memory. The PA at which data is stored in a memory is not the address that a processor uses to index the data during internal manipulations. The processor hardware will assign a logical address (LA) to data being processed by instructions. The LA's and PA's are usually assigned differently so that data manipulations and memory use can both be optimized. Thus, memory accesses entail translating LA's to PA's.
A physical memory is a collection of memory pages or blocks. The PA of a memory location is given by the page address and the relative address of the memory location on that page. Typically, only the LA's of “pages” undergo translation. Relative addresses of memory locations on a page are assigned in the same way in the memory and internally in the processor.
A memory is organized as a collection of pages. For example, a memory having 2^32 memory locations can be organized as 2^20 pages of 2^12 locations per page, i.e., a page size of 4,096, or as 2^12 pages having 2^20 locations per page, i.e., a page size of about 10^6. Since the number of memory pages depends on the page size, the number of bits in the LA of a page depends on the page size.
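By way of illustration (not part of the patent text), the short C sketch below computes, for the two organizations just described, how many pages result and how many bits the page LA needs; the loop bounds and output format are chosen only for this example.

#include <stdio.h>

/* For a memory of 2^32 locations, a page of 2^k locations implies
 * 2^(32-k) pages and a (32-k)-bit page LA. */
int main(void)
{
    unsigned shifts[] = { 12, 20 };   /* the two organizations described above */
    for (unsigned i = 0; i < 2; i++) {
        unsigned k = shifts[i];
        unsigned long long page_size = 1ull << k;
        unsigned long long num_pages = 1ull << (32 - k);
        printf("page size %llu -> %llu pages, %u-bit page LA\n",
               page_size, num_pages, 32 - k);
    }
    return 0;
}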
FIG. 1 shows an address translator 10 of the prior art. A logical address generation unit 12 generates the LA for the memory location to be accessed. Here, the LA's are 32 bits long, because the computer has 2^32 memory locations. Line 13 transmits bits 31 to 12 of an LA, i.e., the LA of a page, to a paging unit 14 for translation. The paging unit 14 includes well-known hardware, i.e., tables, for translating the LA's of pages to PA's of pages. A line 15 transmits the PA of the page to be accessed to a cache memory 16. A line 17 transmits bits 0 to 11 of the LA, i.e., the relative position of the memory location to be accessed on the page, from the logical address generation unit 12 to the cache memory 16 without translation. The translator 10 sends the entire PA of a memory location to be accessed to the cache memory 16, but only page addresses are translated. The translation of LA's to PA's by the paging unit 14 can be slow, because the paging unit 14 uses large page tables (not shown), which are usually stored in memory.
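The flow through FIG. 1 can be sketched in C as follows. The flat page_table array is only a hypothetical stand-in for the paging unit 14 and its page tables; the point of the sketch is that bits 31 to 12 of the LA are translated while bits 0 to 11 pass through unchanged.

#include <stdint.h>
#include <stdio.h>

#define NUM_PAGES (1u << 20)        /* 2^20 pages of 4,096 bytes in a 2^32-byte memory */

/* Hypothetical stand-in for the paging unit's page tables:
 * page_table[page LA] = page PA (identity mapping here, just for the demo). */
static uint32_t page_table[NUM_PAGES];

static uint32_t translate(uint32_t la)
{
    uint32_t page_la = la >> 12;            /* line 13: bits 31..12 go to the paging unit */
    uint32_t offset  = la & 0xFFFu;         /* line 17: bits 11..0 bypass translation */
    uint32_t page_pa = page_table[page_la]; /* paging unit 14: slow table lookup in memory */
    return (page_pa << 12) | offset;        /* full PA delivered to the cache memory 16 */
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_PAGES; i++)
        page_table[i] = i;                  /* identity mapping for illustration */
    printf("PA = 0x%08X\n", translate(0x12345678u));
    return 0;
}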
Referring to FIG. 1, the translator 10 includes a translation lookaside buffer (TLB) 18 for translating recently accessed page LA's. The TLB 18 is organized similarly to a cache memory. The small size of the TLB expedites the translation of page LA's stored therein. A line 19 transmits a page LA from the logical address generation unit 12 to the TLB 18. If the LA matches an entry of the TLB 18, the TLB transmits the corresponding PA to the cache 16 via line 23, i.e., a TLB hit. The TLB 18 also disables the paging unit 14 in response to a TLB hit. If the LA from the line 19 does not match an entry of the TLB 18, the slow paging unit 14 translates the LA.
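The hit/miss behavior just described can be sketched as a small C routine. The fixed-size entry array and the trivial replacement policy are assumptions made for the illustration, and slow_paging_unit merely stands in for the paging unit 14; none of this structure is taken from the patent.

#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 64

struct tlb_entry { uint32_t page_la; uint32_t page_pa; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];   /* stand-in for TLB 18 */

/* Stand-in for the slow paging unit 14 (identity mapping for the demo). */
static uint32_t slow_paging_unit(uint32_t page_la) { return page_la; }

static uint32_t translate_page(uint32_t page_la)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page_la == page_la)
            return tlb[i].page_pa;                       /* TLB hit: paging unit is disabled */

    uint32_t page_pa = slow_paging_unit(page_la);        /* TLB miss: slow translation */
    tlb[0] = (struct tlb_entry){ page_la, page_pa, 1 };  /* cache it (trivial replacement) */
    return page_pa;
}

int main(void)
{
    printf("miss: 0x%05X\n", translate_page(0x12345u));
    printf("hit:  0x%05X\n", translate_page(0x12345u));
    return 0;
}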
Different considerations may favor either smaller or larger page sizes. Since computers frequently load entire pages into active memory, small page sizes may enable more efficient use of small active memories. Since properties such as the cacheability, writeability, and privilege levels of memory locations are often assigned to whole pages, smaller pages enable finer granularity for defining these properties. On the other hand, large pages use less memory to store page properties. Furthermore, large pages have shorter page LA's, which need less memory space for page tables and less time to translate page LA's into PA's. The use of only one page size may not enable these conflicting advantages to be optimized.
To profit from these advantages, one prior art computer employs two page sizes and a separate TLB (not shown) for each page size. Since TLB's are memory storage devices, TLB's use substantial hardware. Employing two TLB's may represent a significant cost in terms of chip area. Furthermore, processors employing a separate TLB for each page size may not perform translations efficiently when the vast majority of recently accessed pages have one size.
FIG. 2 illustrates a second prior art translator 26 for pages of more than one size. The TLB 28 stores data indicating the sizes of the pages corresponding to addresses stored therein. The TLB 28 and the paging unit 14 transmit translated addresses via lines 23, 15 to first data inputs of first and second multiplexers (MUX) 32, 34, respectively. Lines 33, 35 transmit a portion of the LA from the logical address generation unit 12 to second data inputs of the multiplexers 32, 34.
Still referring to FIG. 2, different bits of a LA are translated to a page PA for different page sizes, because only the "page" portion of the LA is translated. For the above-described 4-gigabyte memory and pages of sizes 4,096, 8,192, and 32,768 bytes, the TLB 28 translates respective bits 31 to 12, bits 31 to 13, and bits 31 to 15 of a LA received from the lines 19, 13. Since the lines 23 transmit 20-bit output signals from the TLB 28 for any page size, some of the 20 bits of address information on the lines 23 do not correspond to the correct PA for some page sizes. The MUX 32 recombines the proper bits of the translated page address from the lines 23 with bits from the portion of the LA on the line 33 to obtain the correct upper 20 bits of the PA, i.e., the translation of LA bits (31:12), which are sent to line 40. Page size signals operate select inputs of the MUX's 32, 34 to control the selection of bits from the data lines 23 and 33 so that the PA sent to the cache memory 16 is the PA corresponding to the full LA from the logical address generation unit 12 for any page size.
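The bit selection performed by the MUX 32 can be sketched in C. The function below keeps the translated bits that are valid for the current page size and splices the remaining untranslated LA bits back in; the page-size encoding, the helper name upper20_pa, and the example values are assumptions made for the illustration, not details taken from FIG. 2.

#include <stdint.h>
#include <stdio.h>

/* Assemble the upper 20 PA bits (PA bits 31..12) the way MUX 32 does:
 * keep the translated bits that are valid for the current page size and
 * splice the remaining, untranslated LA bits back in.
 *
 * tlb_out    - 20-bit value on lines 23; only its upper (32 - page_shift)
 *              bits are meaningful for the larger page sizes.
 * la         - full 32-bit logical address (line 33 carries part of it).
 * page_shift - 12, 13, or 15 for 4,096-, 8,192-, or 32,768-byte pages
 *              (the page size signal selects this).                        */
static uint32_t upper20_pa(uint32_t tlb_out, uint32_t la, unsigned page_shift)
{
    unsigned from_la  = page_shift - 12;                  /* 0, 1, or 3 low bits from the LA */
    uint32_t la_bits  = (la >> 12) & ((1u << from_la) - 1u);
    uint32_t tlb_mask = 0xFFFFFu & ~((1u << from_la) - 1u);
    return (tlb_out & tlb_mask) | la_bits;
}

int main(void)
{
    uint32_t la = 0x1234D678u;          /* example logical address */
    uint32_t tlb_out = 0xABCD8u;        /* example 20-bit TLB output on lines 23 */
    printf("4 KB page:  PA[31:12] = 0x%05X\n", upper20_pa(tlb_out, la, 12));
    printf("8 KB page:  PA[31:12] = 0x%05X\n", upper20_pa(tlb_out, la, 13));
    printf("32 KB page: PA[31:12] = 0x%05X\n", upper20_pa(tlb_out, la, 15));
    return 0;
}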
Still referring to FIG. 2, the MUX 32 is in a critical timing pathway for the translator 26. The MUX 32 adds a sequential step of intermediate bit selection after page address translation by the TLB 28. The selection step can slow address translation because of setup times associated with use of the multi-digit MUX 32. The setup time includes, for example, the time to generate the page size signal from the TLB 28 and the time to select one of the data inputs of the MUX 32 after receiving the correct page size signal from line 36. For multiple-bit address signals, the setup time can be greater than 10^−9 seconds. Since memory accesses involve address translation by the TLB 28, the time for the selecting step may have an impact on the speed of memory accesses. In modern processors operating at high frequencies, the added time for multiplexing multiple-bit addresses may be significant; a delay of 10^−9 seconds may inconveniently slow memory accesses.
The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.
SUMMARY OF THE INVENTION
In a first aspect, the invention provides for a content addressable memory (CAM). The CAM includes an input port and a plurality of locations to store page addresses for comparing to an address received on the input port. Each location includes a plurality of lower cells and at least one page size mask cell to send signals to an associated one of the lower cells. The associated one of the lower cells produces a match signal in response to either the page size mask cell sending a mask signal or the portion of the page address stored therein matching a corresponding portion of the address received from the input port. Each location produces a match signal in response to each cell therein producing a match signal.
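As an informal software analogue of the matching rule just described (not a description of the claimed circuit), each stored address bit either equals the corresponding input bit or is forced to match by a mask signal from a page size mask cell. The C sketch below, including its field names, entry count, and example values, is hypothetical.

#include <stdint.h>
#include <stdio.h>

#define CAM_LOCATIONS 4

/* One CAM location: a stored page address plus a mask whose set bits play the
 * role of the page size mask cells, forcing the associated lower cells to match. */
struct cam_location {
    uint32_t stored_page_la;   /* page address held in the lower cells            */
    uint32_t size_mask;        /* 1 bits = masked (always-match) address bits     */
    int      valid;
};

static struct cam_location cam[CAM_LOCATIONS] = {
    { 0x12345u, 0x00000u, 1 }, /* small page: no masked bits                      */
    { 0x2468Au, 0x00001u, 1 }, /* larger page: lowest translated bit masked       */
    { 0x35790u, 0x00007u, 1 }, /* largest page: three lowest translated bits masked */
    { 0, 0, 0 },
};

/* A location matches when every cell matches: for each bit, either the mask
 * signal is asserted or the stored bit equals the input bit.                     */
static int location_matches(const struct cam_location *loc, uint32_t input_page_la)
{
    return loc->valid &&
           (((loc->stored_page_la ^ input_page_la) & ~loc->size_mask) == 0);
}

int main(void)
{
    uint32_t input = 0x2468Bu;   /* differs from location 1 only in a masked bit */
    for (int i = 0; i < CAM_LOCATIONS; i++)
        printf("location %d: %s\n", i,
               location_matches(&cam[i], input) ? "match" : "no match");
    return 0;
}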
A second aspect of the invention provides
