Cache memory indexing using virtual, primary and secondary color

Electrical computers and digital processing systems: memory – Address formation – Address mapping

Patent

Details

711/200, 711/202, 711/220; G06F 12/00


active

060095032

DESCRIPTION:

BRIEF SUMMARY
BACKGROUND OF THE INVENTION

1. Field of the Invention
The invention relates to a cache memory that is virtually or physically indexed and physically tagged.
2. Description of Related Art
Modern processors require cache memories to bridge the gap between fast processors and slow main memories.
In direct mapped caches (see FIGS. 6 and 7), a map function is used for computing a cache index from the physical or virtual address a, thus selecting a line of the cache. Subsequently, a is compared with the address of the storage area currently associated with this cache line (the tag of the cache entry). Equality produces a hit (and the cache line is used instead of the main memory), otherwise we get a miss.
In most cases (a mod cache size)/line size is used as a map function. In this case, the complete virtual address need not be stored in the cache, but a/cache size is sufficient.
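As a concrete sketch of this map function (the cache and line sizes here are illustrative assumptions, not values from the patent):

```python
# Sketch of direct-mapped cache indexing; sizes are illustrative assumptions.
CACHE_SIZE = 4096   # bytes
LINE_SIZE = 64      # bytes
NUM_LINES = CACHE_SIZE // LINE_SIZE  # 64 lines

def index_of(a: int) -> int:
    # (a mod cache size) / line size, as described in the text
    return (a % CACHE_SIZE) // LINE_SIZE

def tag_of(a: int) -> int:
    # a / cache size suffices as the stored tag
    return a // CACHE_SIZE

# Two map-equivalent addresses: same index, different tags,
# so they compete for the same cache line.
a1, a2 = 0x1040, 0x2040
assert index_of(a1) == index_of(a2)
assert tag_of(a1) != tag_of(a2)
```

Because the index bits are simply the low-order line-number bits, only the remaining high-order bits need to be stored as the tag.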
Direct mapped caches are simpler, but lead to higher miss rates than n-way set-associative caches do. These caches consist in principle of n direct mapped cache blocks which are accordingly smaller. Additionally, it is ensured that each main memory element is contained in at most one block. Since the map function indexes n cache lines each time, a maximum of n elements with map-equivalent addresses can be contained in the cache. This n-fold associativity reduces the probability of clashes and increases the hit rate correspondingly.
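A minimal model of such an n-way set-associative lookup might look as follows (the class name, the FIFO replacement policy, and the sizes are assumptions for illustration only):

```python
# Minimal n-way set-associative cache model (illustrative, not cycle-accurate).
class SetAssociativeCache:
    def __init__(self, num_sets: int, ways: int, line_size: int):
        self.num_sets = num_sets
        self.ways = ways
        self.line_size = line_size
        # Each set holds up to `ways` tags; each memory element is
        # cached in at most one way of one set.
        self.sets = [[] for _ in range(num_sets)]

    def _split(self, a: int):
        line = a // self.line_size
        return line % self.num_sets, line // self.num_sets  # (index, tag)

    def access(self, a: int) -> bool:
        """Return True on hit; on a miss, fill the line and return False."""
        index, tag = self._split(a)
        ways = self.sets[index]
        if tag in ways:
            return True
        if len(ways) == self.ways:
            ways.pop(0)  # evict oldest entry (FIFO, chosen for simplicity)
        ways.append(tag)
        return False

cache = SetAssociativeCache(num_sets=16, ways=2, line_size=64)
```

With two ways per set, two map-equivalent addresses can reside in the cache simultaneously; a third one evicts an earlier entry, which is exactly the clash probability the associativity reduces.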
Physically and virtually indexed caches are well-known. In the case of the physically indexed cache (FIG. 6), the virtual address delivered by the processor is first translated into a physical address by the translation lookaside buffer (TLB). Subsequently, the cache is addressed using the physical address.
In the case of the virtually indexed cache (FIG. 7), the cache is addressed directly using the virtual address. A translation into the corresponding physical address is done only upon cache miss. The advantage of a virtually indexed cache is higher speed since the translation step to be done by the TLB is not necessary. Its disadvantages appear in the case of synonyms (aliasing) and/or multiprocessor systems.
Though a physically indexed cache does not show these disadvantages, it requires a complete address translation step (virtual → physical) from the TLB prior to initiating a cache access.
The cache type favored nowadays is virtually indexed and real (i.e., physically) tagged. It is as fast as a virtually indexed and virtually tagged cache, but avoids most disadvantages of the latter, in particular problems with multiprocessor systems, synonyms, sharing and coherence.
A virtually indexed and physically tagged cache enables parallel execution of TLB and cache access (see FIG. 8). The instruction pipeline of the processor is therefore shorter so that the latency of an instruction is usually reduced by one cycle and the processor's performance is increased correspondingly.
The mechanism remains simple if all address bits (i) required for cache indexing are in the area of the address offsets (address within a page). Since this address part is not modified by translating the virtual address into the physical address, the cache can be addressed (indexed) even before the TLB translation step. Only at the end of a cache access and simultaneous TLB translation step, is it checked whether the physical address associated with the cache entry (the tag) matches the physical address delivered by the TLB. For this purpose, only the high-order bits of the address which are adjacent to the index part (i) need to be compared since the cache entry indexed by (i) can only be associated with addresses having index bits of value (i). Accordingly, only the high-order bits have to be stored in the cache as tag (physical address).
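The parallel lookup described above can be sketched as follows (the page size, cache geometry, and dict-based TLB are illustrative assumptions):

```python
# Sketch: virtually indexed, physically tagged lookup where all index bits
# lie within the page offset. Sizes are illustrative assumptions.
PAGE_SIZE = 4096   # 2^12-byte pages
LINE_SIZE = 64
NUM_LINES = 64     # 64 lines x 64 bytes = one page: index fits in the offset

def cache_index(va: int) -> int:
    # The index bits are inside the page offset, so they are identical in
    # the virtual and the physical address: indexing can begin before the
    # TLB has translated anything.
    return (va % PAGE_SIZE) // LINE_SIZE

def lookup(va: int, tlb: dict, tags: list) -> bool:
    index = cache_index(va)         # cache access starts untranslated
    pa_page = tlb[va // PAGE_SIZE]  # TLB translation runs in parallel
    # Only at the end are the high-order (tag) bits compared.
    return tags[index] == pa_page

tlb = {0x3: 0x7}                  # virtual page 3 -> physical frame 7
tags = [None] * NUM_LINES
tags[cache_index(0x3040)] = 0x7   # line holds data from physical frame 7
assert lookup(0x3040, tlb, tags) is True
```

Note that `tags` stores only the physical frame number, i.e., the address bits above the index part, exactly as the text prescribes.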
An n-way set-associative cache of this type can have a maximum size of n × 2^P, where 2^P is the page size. Larger caches require larger pages or higher associativity.
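With an assumed page size of 2^12 = 4096 bytes, the bound works out as:

```python
# Maximum size of a virtually indexed, physically tagged cache whose index
# bits must fit within the page offset: n ways times the page size.
def max_cache_size(ways: int, page_size: int) -> int:
    return ways * page_size

PAGE = 4096  # an assumed 4 KiB page
assert max_cache_size(1, PAGE) == 4 * 1024    # direct mapped: one page
assert max_cache_size(4, PAGE) == 16 * 1024   # 4-way: 16 KiB
```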
However, a more interesting technique is page coloring. This is a method of allocating pages in physical memory such that the low-order bits of the physical page number (the page color) match those of the corresponding virtual page number, so that virtual and physical addresses agree even in the index bits that lie above the page offset.

REFERENCES:
patent: 5226133 (1993-07-01), Taylor et al.
patent: 5752069 (1998-05-01), Roberts et al.
patent: 5761726 (1998-06-01), Guttag et al.
Patterson and Hennessy, "Computer Architecture: A Quantitative Approach," pp. 432-448, Dec. 1990.
