Cache including a prefetch way for storing cache lines and...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

Classification codes: C711S003000, C711S118000, C711S154000, C712S237000
Type: Reexamination Certificate
Status: active
Patent number: 06219760

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is related to computer systems and, more particularly, to prefetching and caching mechanisms within computer systems.
2. Description of the Related Art
Superscalar microprocessors achieve high performance by executing multiple instructions per clock cycle and by choosing the shortest possible clock cycle consistent with the design. On the other hand, superpipelined microprocessor designs divide instruction execution into a large number of subtasks which can be performed quickly, and assign pipeline stages to each subtask. By overlapping the execution of many instructions within the pipeline, superpipelined microprocessors attempt to achieve high performance.
Superscalar microprocessors demand high memory bandwidth due to the number of instructions attempting concurrent execution and due to the increasing clock frequency (i.e. shortening clock cycle) employed by the superscalar microprocessors. Many of the instructions include memory operations to fetch (read) and update (write) memory operands. The memory operands must be fetched from or conveyed to memory, and each instruction must originally be fetched from memory as well. Similarly, superpipelined microprocessors demand high memory bandwidth because of the high clock frequency employed by these microprocessors and the attempt to begin execution of a new instruction each clock cycle. It is noted that a given microprocessor design may employ both superscalar and superpipelined techniques in an attempt to achieve the highest possible performance characteristics.
Microprocessors are often configured into computer systems which have a relatively large, relatively slow main memory. Typically, the main memory system comprises multiple dynamic random access memory (DRAM) modules. The large main memory provides storage for a large number of instructions and/or a large amount of data for use by the microprocessor, providing faster access to the instructions and/or data than may be achieved from disk storage, for example. However, the access times of modern DRAMs are significantly longer than the clock cycle length of modern microprocessors. The memory access time for each set of bytes being transferred to the microprocessor is therefore long. Accordingly, the main memory system is not a high bandwidth system, and microprocessor performance may suffer due to a lack of available memory bandwidth.
In order to allow high bandwidth memory access (thereby increasing the instruction execution efficiency and ultimately microprocessor performance), computer systems typically employ one or more caches to store the most recently accessed data and instructions. Additionally, the microprocessor may employ caches internally. A relatively small number of clock cycles may be required to access data stored in a cache, as opposed to the relatively larger number of clock cycles required to access the main memory.
High memory bandwidth may be achieved in a computer system if the cache hit rates of the caches employed therein are high. An access is a hit in a cache if the requested data is present within the cache when the access is attempted. On the other hand, an access is a miss in a cache if the requested data is absent from the cache when the access is attempted. Cache hits are provided to the microprocessor in a small number of clock cycles, allowing subsequent accesses to occur more quickly as well and thereby increasing the available bandwidth. Cache misses require the access to receive data from the main memory, thereby lowering the available bandwidth.
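To make the relationship between hit rate and effective latency concrete, the following C++ sketch computes an average access time from a hit time, a miss penalty, and a hit rate. The cycle counts are illustrative assumptions and are not taken from the patent.

    #include <cstdio>
    #include <initializer_list>

    // Illustrative average access time calculation. The cycle counts below are
    // hypothetical, chosen only to show how strongly the hit rate determines
    // the effective latency seen by the microprocessor.
    int main() {
        const double hit_time = 2.0;       // cycles to service a cache hit (assumed)
        const double miss_penalty = 60.0;  // cycles to fetch from main memory (assumed)

        for (double hit_rate : {0.80, 0.90, 0.95, 0.99}) {
            double average = hit_time + (1.0 - hit_rate) * miss_penalty;
            std::printf("hit rate %.2f -> average access time %.1f cycles\n",
                        hit_rate, average);
        }
        return 0;
    }

With these assumed numbers, raising the hit rate from 0.90 to 0.99 cuts the average access time from 8.0 cycles to 2.6 cycles.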
In order to increase cache hit rates, computer systems may employ prefetching to “guess” which data will be requested by the microprocessor in the future. The term prefetch, as used herein, refers to transferring data (e.g. a cache line) into a cache prior to a request for the data being received by the cache. A “cache line” is a contiguous block of data which is the smallest unit for which a cache allocates and deallocates storage. Generally, prefetch algorithms are based upon the pattern of accesses which have been performed by the microprocessor. If the prefetched data is later accessed by the microprocessor, then the cache hit rate may be increased due to transferring the prefetched data into the cache before the data is requested.
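The patent does not commit to a specific prefetch algorithm; as one common example of an access-pattern-based scheme, a next-line prefetcher requests the cache line that follows a sequentially accessed line. The sketch below is hypothetical and its names are illustrative only.

    #include <cstdint>
    #include <optional>

    constexpr uint64_t kLineSize = 64;  // bytes per cache line (assumed size)

    // Hypothetical next-line prefetcher: when two consecutive accesses touch
    // adjacent cache lines, guess that the following line will be needed soon
    // and return its address as a prefetch candidate.
    struct NextLinePrefetcher {
        std::optional<uint64_t> last_line;

        std::optional<uint64_t> on_access(uint64_t address) {
            uint64_t line = address / kLineSize;
            std::optional<uint64_t> candidate;
            if (last_line && line == *last_line + 1) {
                candidate = (line + 1) * kLineSize;  // prefetch the next sequential line
            }
            last_line = line;
            return candidate;
        }
    };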
Unfortunately, cache hit rates may be decreased (or alternatively cache miss rates increased) by performing prefetching if the data being prefetched is not later accessed by the microprocessor. A cache is a finite storage resource, and therefore the prefetched cache lines generally displace cache lines stored in the cache. If a prefetched cache line displaces a particular cache line in the cache, the prefetched cache line is never subsequently accessed by the microprocessor, and the displaced cache line is later accessed by the microprocessor, then a miss is detected for the displaced cache line. The miss is effectively caused by the prefetch operation. The process of displacing a later-accessed cache line with a non-referenced prefetched cache line is referred to herein as cache pollution. A mechanism for performing prefetch without incurring cache pollution is desired.
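The pollution effect can be reproduced with a toy model. The fully associative, two-entry LRU set below is purely illustrative (real caches are set associative and larger); it shows a prefetch fill evicting a line that is re-used later, turning what would have been a hit into a miss.

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <list>

    // Toy two-entry LRU set used only to illustrate cache pollution.
    struct LruSet {
        std::size_t capacity;
        std::list<uint64_t> lru;  // front = most recently used tag

        bool access(uint64_t tag) {  // demand access: returns hit or miss
            for (auto it = lru.begin(); it != lru.end(); ++it) {
                if (*it == tag) {
                    lru.splice(lru.begin(), lru, it);  // move to MRU position
                    return true;
                }
            }
            insert(tag);  // miss: fill the set
            return false;
        }

        void insert(uint64_t tag) {  // fill (demand or prefetch)
            if (lru.size() == capacity) lru.pop_back();  // evict the LRU line
            lru.push_front(tag);
        }
    };

    int main() {
        LruSet set{2};
        set.access(0xA);             // miss: set holds {A}
        set.access(0xB);             // miss: set holds {B, A}
        set.insert(0xC);             // prefetch of C evicts A; C is never referenced
        bool hit = set.access(0xA);  // A is requested again
        std::printf("re-access of line A: %s\n", hit ? "hit" : "miss");  // prints "miss"
        return 0;
    }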
SUMMARY OF THE INVENTION
The problems outlined above are in large part solved by a cache in accordance with the present invention. The cache employs one or more prefetch ways for storing prefetch cache lines and one or more ways for storing accessed cache lines. Prefetch cache lines are stored into the prefetch way, while cache lines fetched in response to cache misses for requests initiated by a microprocessor connected to the cache are stored into the non-prefetch ways. Advantageously, accessed cache lines are maintained within the cache separately from prefetch cache lines. When a prefetch cache line is presented to the cache for storage, the prefetch cache line may displace another prefetch cache line but does not displace an accessed cache line. In other words, cache pollution is avoided by storing prefetch cache lines separately from accessed cache lines. A cache hit in either the prefetch way or the non-prefetch ways causes the cache line to be delivered to the requesting microprocessor in a cache hit fashion. Cache hit rates may be beneficially increased due to the presence of prefetch data in the cache, while the detrimental effects of cache pollution are avoided.
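The following C++ sketch models one way this arrangement could be organized; the way count and names are assumptions for illustration, not the patent's implementation. The key property is that a prefetch fill can choose its victim only from the prefetch way, so accessed cache lines are never displaced by prefetch data, while a lookup reports a hit in either kind of way identically.

    #include <array>
    #include <cstdint>

    constexpr int kAccessedWays = 4;  // number of non-prefetch ways (assumed)

    struct Line {
        bool     valid = false;
        uint64_t tag   = 0;
    };

    // One set of the cache: ordinary ways hold accessed (demand-fetched) lines,
    // and a separate prefetch way holds prefetched lines.
    struct CacheSet {
        std::array<Line, kAccessedWays> accessed_ways;  // demand fills land here
        Line prefetch_way;                              // prefetch fills land only here

        // A hit in either the accessed ways or the prefetch way is reported the
        // same way, so data returns to the requester over the same hit path.
        bool lookup(uint64_t tag) const {
            for (const Line& way : accessed_ways)
                if (way.valid && way.tag == tag) return true;
            return prefetch_way.valid && prefetch_way.tag == tag;
        }

        // A prefetch fill may displace only the current prefetch line.
        void fill_prefetch(uint64_t tag) { prefetch_way = {true, tag}; }

        // A demand (miss) fill picks its victim among the accessed ways only.
        void fill_demand(uint64_t tag, int victim_way) {
            accessed_ways[victim_way] = {true, tag};
        }
    };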
The cache is further configured to move prefetch cache lines from the prefetch way to the non-prefetch way if the prefetch cache lines are requested (i.e., they become accessed cache lines). A variety of mechanisms are described herein. Instruction cache lines may be moved immediately upon access, while data cache line accesses may be counted and a number of accesses greater than a predetermined threshold value may occur prior to moving the data cache line from the prefetch way to the non-prefetch way. Treating data and instruction cache lines differently may further avoid the effects of cache pollution by not moving infrequently accessed data cache lines into the non-prefetch way. Additionally, movement of an accessed cache line from the prefetch way to the non-prefetch way may be delayed until the accessed cache line is to be replaced by a prefetch cache line. Advantageously, the number of accessed cache lines stored in the cache may be temporarily increased when a prefetch cache line becomes an accessed cache line.
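A small sketch of that promotion decision follows; the threshold value and field names are assumptions chosen for illustration. Instruction lines are promoted on their first access from the prefetch way, while data lines are promoted only after their access count exceeds a threshold.

    // Hypothetical promotion policy for a line that hits in the prefetch way.
    constexpr unsigned kDataAccessThreshold = 2;  // assumed threshold value

    struct PrefetchLineState {
        bool     is_instruction = false;
        unsigned access_count   = 0;
    };

    // Returns true when the line should be moved from the prefetch way into a
    // non-prefetch (accessed) way. Instruction lines move on first access; data
    // lines move only once they have been accessed more than the threshold.
    bool should_promote(PrefetchLineState& line) {
        if (line.is_instruction)
            return true;
        ++line.access_count;
        return line.access_count > kDataAccessThreshold;
    }

As the paragraph above notes, the physical move itself may additionally be deferred until the promoted line is about to be replaced by a new prefetch.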
By providing a prefetch way within the cache for prefetch cache lines, the cache described herein uses the same channel for returning a cache hit of prefetch data to the requesting processor as is used for returning a cache hit of previously accessed data. Using the same channel may engender cost savings over implementations which employ a special channel for prefetch data return.
Broadly speaking, the present invention contemplates a cache comprising a storage coupled to a control unit. The storage includes at least a first way for storing cache lines and at least one prefetch way for storing prefetch cache lines. The control unit is co
