Electrical computers and digital processing systems: memory – Addressing combined with specific memory configuration or... – Addressing cache memories
Patent
1997-02-03
1999-06-15
Swann, Tod R.
711/210, 711/203, 711/206, 711/207, G06F 12/08
active
059132221
DESCRIPTION:
BRIEF SUMMARY
FIELD OF THE INVENTION
The invention relates to a cache memory device having a cache memory that is indexed virtually, the cache entries of which are tagged with physical (real) addresses.
BACKGROUND OF THE INVENTION
Modern processors require cache memories in order to bridge the gap between fast processors and slow main memory.
Physically and virtually indexed caches are known. In a physically indexed cache (FIG. 7), the virtual address supplied by the processor is first translated into a physical address by the Translation Lookaside Buffer (TLB). Then, the cache is addressed using this physical address.
In a virtually indexed cache (FIG. 8), the cache is addressed directly by the virtual address. A translation into the corresponding physical address takes place only in the case of a cache miss. The advantage of a virtually indexed cache lies in its higher speed, since the translation step by the TLB is omitted. Its disadvantage appears with synonyms (aliasing), i.e. when two different virtual addresses map to the same physical address, so that the same datum may reside in two different cache lines.
Direct-mapped caches use a map function (as shown in FIGS. 7 and 8) to calculate a cache index from the physical or virtual address a and to select a cache line with it. Then, a is compared with the address of the memory area currently associated with this cache line (the tag of the cache entry). If they are identical, there is a hit (and the cache line is used instead of the main memory); otherwise there is a miss. Usually, (a mod cache size)/line size is used as the map function. As a consequence, the complete address a need not be stored in the cache; a/cache size suffices as the tag.
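By way of illustration, the following C sketch (not taken from the patent; the constants CACHE_SIZE and LINE_SIZE are hypothetical values chosen only for the example) computes the cache index and the tag of an address a with exactly this map function:

#include <stdint.h>

/* Hypothetical cache geometry, for illustration only. */
#define CACHE_SIZE (16 * 1024)   /* total cache capacity in bytes */
#define LINE_SIZE  32            /* bytes per cache line          */
#define NUM_LINES  (CACHE_SIZE / LINE_SIZE)

/* Map function of a direct-mapped cache:
 * index = (a mod CACHE_SIZE) / LINE_SIZE selects one cache line,
 * tag   = a / CACHE_SIZE identifies the memory block occupying it. */
static inline uint32_t cache_index(uint32_t a)
{
    return (a % CACHE_SIZE) / LINE_SIZE;   /* line within the cache */
}

static inline uint32_t cache_tag(uint32_t a)
{
    return a / CACHE_SIZE;                 /* the low-order bits are implied by the index */
}

/* A lookup is a hit iff the tag stored at cache_index(a) equals cache_tag(a). */
int is_hit(const uint32_t *tags, uint32_t a)
{
    return tags[cache_index(a)] == cache_tag(a);
}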
Direct-mapped caches are simpler, but cause higher miss rates than n-way set-associative caches. The latter basically consist of n correspondingly smaller direct-mapped cache blocks, and it is ensured that each main-memory element is located in at most one block. Since the map function designates n cache lines, the cache can hold up to n elements with map-equivalent addresses. This n-fold associativity reduces the probability of clashes and increases the hit rate accordingly.
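A corresponding n-way lookup might be sketched as follows; again, the geometry (WAYS, WAY_SIZE, SET_COUNT) is hypothetical, and the code merely illustrates the n-fold associativity described above, not the patent's implementation:

#include <stdint.h>

/* Hypothetical geometry, for illustration only. */
#define CACHE_SIZE (16 * 1024)
#define LINE_SIZE  32
#define WAYS       4
#define WAY_SIZE   (CACHE_SIZE / WAYS)          /* each way is a smaller direct-mapped block */
#define SET_COUNT  (WAY_SIZE / LINE_SIZE)

/* tags[set][way] holds the tag of the memory block currently cached in
 * that way of the set; a real cache would also keep valid bits.         */
int set_assoc_hit(uint32_t tags[SET_COUNT][WAYS], uint32_t a)
{
    uint32_t set = (a % WAY_SIZE) / LINE_SIZE;  /* same map function, applied per way */
    uint32_t tag = a / WAY_SIZE;

    /* The map function designates n candidate lines (one per way), so up to
     * n map-equivalent addresses can reside in the cache simultaneously.    */
    for (int way = 0; way < WAYS; way++)
        if (tags[set][way] == tag)
            return 1;                           /* hit in one of the n ways      */
    return 0;                                   /* miss: go to TLB / main memory */
}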
The cache type preferred at present is a virtually-indexed and real (physically) tagged cache. It is just as fast as a virtually-indexed and virtually-tagged cache, yet it avoids most of the disadvantages thereof, in particular the problems with multi-processor systems, synonyms, sharing and coherence.
It is true that a physically indexed cache is free from these disadvantages as well, but it requires a complete address translation (virtual → real) by the TLB before the cache access can be initiated. A virtually indexed and physically tagged cache, on the other hand, allows the TLB and cache accesses to proceed in parallel (see FIG. 9). The instruction pipeline of the processor therefore becomes shorter, so that the latency of an instruction, as a rule, decreases by one clock cycle, with the processor performance increasing accordingly.
The mechanism remains simple as long as all of the address bits (i) necessary for indexing the cache lie within the address offset (the address within a page). Since this address portion is not changed by the translation of the virtual address into the physical address, the cache can be indexed with it even before the TLB translation step. Only at the end of the cache access and the parallel TLB translation is it checked whether the physical address (the tag) associated with the cache entry is identical to the physical address supplied by the TLB. In doing so, only the most significant address bits adjoining the index portion (i) have to be compared, since the cache entry indexed by (i) can only be associated with addresses whose index bits have the value (i). Accordingly, only these most significant bits have to be stored in the cache as the tag (physical address).
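The following C sketch illustrates this scheme under the assumption of 4 KB pages, 32-byte lines and a single direct-mapped block of page size (all values hypothetical, and the TLB is trivially stubbed for the example): the index is taken from the untranslated offset bits, while the tag comparison uses the page frame number delivered by the TLB.

#include <stdint.h>

#define PAGE_BITS   12                       /* 4 KB pages (hypothetical)              */
#define LINE_BITS   5                        /* 32-byte lines (hypothetical)           */
#define INDEX_BITS  (PAGE_BITS - LINE_BITS)  /* all index bits lie in the page offset  */
#define NUM_LINES   (1u << INDEX_BITS)

/* Trivial stand-in for a TLB (identity mapping), purely for illustration. */
static uint32_t tlb_translate(uint32_t virtual_page) { return virtual_page; }

/* Cache entry: tagged only with the physical page frame number, since the
 * index bits are identical for the virtual and the physical address.       */
typedef struct {
    uint32_t frame_tag;   /* physical address bits above the index/offset portion */
    int      valid;
} cache_line_t;

int vipt_hit(cache_line_t cache[NUM_LINES], uint32_t vaddr)
{
    /* The index is taken from the page offset, so it is available
     * before (or in parallel with) the TLB translation.             */
    uint32_t index = (vaddr >> LINE_BITS) & (NUM_LINES - 1);

    /* In hardware this translation proceeds in parallel with the
     * cache array access; here it is written sequentially.          */
    uint32_t frame = tlb_translate(vaddr >> PAGE_BITS);

    /* Only the physical bits above the index/offset portion need to
     * be compared (and stored as the tag).                           */
    return cache[index].valid && cache[index].frame_tag == frame;
}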
An n-way set-associative cache of this type may only be up to n × 2^P in size, where 2^P is the page size. The cache size may be increased by larger pages or increased associativity.
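For example, assuming a page size of 2^P = 4 KB and 4-way associativity (values chosen only for illustration), such a cache may be at most 4 × 4 KB = 16 KB; larger pages or higher associativity raise this limit proportionally.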
However, an interesting techni
REFERENCES:
patent: 4400774 (1983-08-01), Toy
patent: 5226133 (1993-07-01), Taylor et al.
patent: 5584002 (1996-12-01), Emma et al.
patent: 5668968 (1997-09-01), Wu
GMD-Forschungszentrum Informationstechnik GmbH
Lee Felix B.
Swann Tod R.