Computer cache memory with classes and dynamic selection of...
Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Reexamination Certificate
1997-07-22
2001-04-24
Yoo, Do Hyun (Department: 2187)
C711S118000, C711S133000, C711S159000, C711S160000
Reexamination Certificate
active
06223256
ABSTRACT:
FIELD OF INVENTION
This invention relates generally to digital computer memory systems, and more specifically to systems for improving the hit rate in cache memory systems.
BACKGROUND OF THE INVENTION
Most computer systems employ a multilevel hierarchy of memory systems, with relatively fast, expensive, limited-capacity memory at the highest level of the hierarchy, proceeding to slower, lower-cost, higher-capacity memory at the lowest level of the hierarchy. Typically, the hierarchy includes a small, fast memory called a cache, either physically integrated within a processor integrated circuit or mounted physically close to the processor for speed. There may be separate instruction caches and data caches. There may be multiple levels of caches. An item that is fetched from a lower level in the memory hierarchy typically evicts (replaces) an item from the cache. The selection of which item to evict may be determined by a replacement algorithm. The present patent document is concerned with replacement strategies and algorithms, as explained further below.
The goal of a memory hierarchy is to reduce the average memory access time. A memory hierarchy is cost effective only if a high percentage of items requested from memory are present in the highest levels of the hierarchy (the levels with the shortest latency) when requested. If a processor requests an item from a cache and the item is present in the cache, the event is called a cache hit. If a processor requests an item from a cache and the item is not present in the cache, the event is called a cache miss. In the event of a cache miss, the requested item is retrieved from a lower level (longer latency) of the memory hierarchy. This may have a significant impact on performance. The average memory access time may be reduced by improving the cache hit/miss ratio, reducing the time penalty for a miss, and reducing the time required for a hit. The present patent document is primarily concerned with improving the hit/miss ratio of a cache.
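To make the relationship concrete, the average memory access time can be estimated as the hit time plus the miss rate multiplied by the miss penalty. The following C sketch is not part of the patent; the cycle counts and the function name are purely illustrative assumptions. It shows how strongly the hit/miss ratio dominates the average:

    #include <stdio.h>

    /* Hypothetical latencies, in processor cycles; real values vary widely. */
    #define HIT_TIME      2.0   /* time to service a request that hits in the cache */
    #define MISS_PENALTY  50.0  /* extra time to fetch the line from a lower level  */

    /* Average memory access time = hit time + miss rate * miss penalty. */
    static double average_access_time(double miss_rate)
    {
        return HIT_TIME + miss_rate * MISS_PENALTY;
    }

    int main(void)
    {
        /* Cutting the miss rate from 10% to 1% reduces the average access
           time from 7.0 cycles to 2.5 cycles in this hypothetical example. */
        printf("10%% miss rate: %.1f cycles\n", average_access_time(0.10));
        printf(" 1%% miss rate: %.1f cycles\n", average_access_time(0.01));
        return 0;
    }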
Ideally, an item is placed in the cache only if it is likely to be referenced again soon. Items having this property are said to have locality. Items having little or no reuse “pollute” a cache and ideally should never be placed in a cache. There are two types of locality, temporal and spatial. Temporal locality means the very same item is likely to be referenced again soon. Spatial locality means that items having addresses near the address of a recently referenced item are likely to be referenced soon. For example, sequential data streams and sequential instruction streams typically have high spatial locality and little temporal locality. Since data streams often have a mixture of temporal and spatial locality, performance may be reduced because sections of the data stream that are inherently random or sequential can flush items out of the cache that are better candidates for long term reference. Typically, the minimum amount of memory that can be transferred between a cache and a next lower level of the memory hierarchy is called a line, or sometimes a block or page. Typically, spatial locality is accommodated by increasing the size of the unit of transfer (line, block, page). In addition, if a data stream is sequential in nature, prefetching can also be used. There are practical limits to the size of cache lines, and prefetching can flush lines that may soon be reused from the cache. The present patent document is primarily concerned with strategies for ensuring that lines having the highest probability of reuse remain in a cache, and for ensuring that lines having a lower probability of reuse do not evict lines having a higher probability of reuse.
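As a purely illustrative sketch (the array size and variable names are assumptions, not taken from the patent), the loop below exhibits both kinds of locality: the counter is the very same word referenced on every iteration (temporal locality), while the array elements are referenced at sequentially increasing addresses (spatial locality):

    #define N 1024

    static int    counter;      /* temporal locality: the same word is touched repeatedly  */
    static double samples[N];   /* spatial locality: neighboring words are touched in turn */

    double example(void)
    {
        double sum = 0.0;
        for (int i = 0; i < N; i++) {
            counter++;          /* same address every iteration (temporal)     */
            sum += samples[i];  /* sequentially increasing addresses (spatial) */
        }
        return sum;
    }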
If a cache stores an entire line address along with the data and any line can be placed anywhere in the cache, the cache is said to be fully associative. However, for a large cache in which any line can be placed anywhere, the hardware required to rapidly determine if an entry is in the cache (and where) may be very large and expensive. For large caches, a faster, space saving alternative is to use a subset of an address (called an index) to designate a line position within the cache, and then store the remaining set of more significant bits of each physical address (called a tag) along with the data. In a cache with indexing, an item with a particular address can be placed only at the one place (set of lines) within the cache designated by the index. If the cache is arranged so that the index for a given address maps to exactly one line in the subset, the cache is said to be direct mapped. In general, large direct mapped caches can have a shorter access time for a cache hit relative to associative caches of the same size. However, direct mapped caches have a higher probability of cache misses relative to associative caches of the same size because many lines of memory map to each available space in the direct mapped cache. If the index maps to more than one line in the subset, the cache is said to be set associative. All or part of an address is hashed to provide a set index which partitions the address space into sets. For a direct mapped cache, since each line can only be placed in one place, no algorithm is required for replacement. In general, all caches other than direct mapped caches require an algorithm for replacement. That is, when an index maps to more than one line of memory in a cache set, we must choose which line to replace. The present patent document is concerned with any caches in which a replacement algorithm is required for determining which item in a cache to evict when a new item is added to the cache. Therefore, the present patent document is primarily concerned with set associative or fully associative caches.
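The following C sketch illustrates how an index and a tag might be derived from an address; the cache geometry (64-byte lines, 256 sets) and the function names are hypothetical assumptions chosen only for the example, not parameters given in the patent:

    #include <stdint.h>

    /* Hypothetical geometry: 64-byte lines, 256 sets, several lines per set. */
    #define LINE_BYTES   64
    #define NUM_SETS     256
    #define OFFSET_BITS  6      /* log2(LINE_BYTES) */
    #define INDEX_BITS   8      /* log2(NUM_SETS)   */

    /* The low-order bits select a byte within the line, the next bits (the
       index) select the set, and the remaining high-order bits form the tag
       that is stored with the data and compared on every lookup.            */
    static uint32_t cache_index(uint32_t addr)
    {
        return (addr >> OFFSET_BITS) & (NUM_SETS - 1);
    }

    static uint32_t cache_tag(uint32_t addr)
    {
        return addr >> (OFFSET_BITS + INDEX_BITS);
    }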
Typically, a memory is organized into words (for example, 32 bits per word) and a line is typically multiple words (for example, 16 words per line). Physical main memory is also typically divided into pages (also called blocks or segments), with many lines per page. In many modern computer memory architectures, a CPU produces virtual addresses that are translated by a combination of hardware and software to physical addresses, which access physical main memory. A group of virtual addresses may be dynamically assigned to each page. Virtual memory (paging or segmentation) requires a data structure, sometimes called a page table, that translates the virtual address to the physical address. Typically, a page table entry (PTE) includes more than just an address. A PTE may include information regarding write protection, use authorization, and many other status bits and attribute bits useful to the operating system. To reduce address translation time, computers commonly use a specialized associative cache dedicated to address translation, commonly called a Translation Look-aside Buffer (TLB). A TLB entry is a cache entry, where the tag is the high order bits of the page's virtual address and the data portion is a physical page address plus the additional status bits and attribute bits stored in a PTE.
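The structures below are a minimal sketch of what a page table entry and a TLB entry might contain, assuming 32-bit addresses; the field names and bit assignments are illustrative assumptions, not the layout of any particular architecture or of the patent:

    #include <stdint.h>

    /* Hypothetical page table entry: a physical page number plus the kinds of
       status and attribute bits mentioned above.                              */
    struct pte {
        uint32_t phys_page;          /* physical page (frame) number */
        unsigned valid        : 1;
        unsigned writable     : 1;   /* write protection             */
        unsigned user_allowed : 1;   /* use authorization            */
        unsigned accessed     : 1;
        unsigned dirty        : 1;
    };

    /* A TLB entry is a cached translation: the tag is the virtual page number
       (the high-order bits of the virtual address) and the data is the
       corresponding page table entry.                                         */
    struct tlb_entry {
        uint32_t   virt_page;        /* tag: virtual page number            */
        struct pte translation;      /* data: physical page plus attributes */
    };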
In the event of a cache miss, typically one line in a cache is replaced by the newly requested line. In the case of a direct mapped cache, a new line replaces a line at one fixed place. In the case of fully associative caches, a replacement algorithm is needed to decide which line in the cache is to be replaced. In the case of set associative caches, a replacement algorithm is needed to decide which line in a set is replaced. The algorithm for deciding which lines should be replaced in a fully associative or set associative cache is typically based on run-time historical data, such as which line is least-recently-used. Alternatively, a replacement algorithm may be based on historical data regarding least-frequently-used. Still other alternatives include first-in first-out, and pseudo-random replacement. Finally, as discussed immediately below, it may be useful to have a replacement algorithm which stores certain lines once and then locks them in place (never replaces them).
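As a minimal sketch (assuming a 4-way set associative cache; the structure and names are illustrative and do not describe the patent's own mechanism), the function below chooses a victim line within one set using least-recently-used replacement, honoring a per-line lock bit so that locked lines are never evicted:

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_WAYS 4

    struct cache_line {
        bool     valid;
        bool     locked;      /* locked lines are stored once and never replaced */
        uint32_t tag;
        uint64_t last_used;   /* timestamp of the most recent reference          */
    };

    /* Choose which line in a set to replace: prefer an invalid (empty) line,
       otherwise evict the least-recently-used line that is not locked.
       Returns -1 if every line in the set is locked.                         */
    static int choose_victim(const struct cache_line set[NUM_WAYS])
    {
        int      victim = -1;
        uint64_t oldest = UINT64_MAX;

        for (int way = 0; way < NUM_WAYS; way++) {
            if (!set[way].valid)
                return way;                 /* free slot: no eviction needed */
            if (set[way].locked)
                continue;                   /* never evict a locked line     */
            if (set[way].last_used < oldest) {
                oldest = set[way].last_used;
                victim = way;
            }
        }
        return victim;
    }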
In some computer systems, some applications
Hewlett-Packard Company
Moazzami Nasser
Winfield Augustus W.
Yoo Do Hyun