Multiple issue algorithm with over subscription avoidance...

Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
Type: Reexamination Certificate
Filed: 2000-02-21
Issued: 2002-07-30
Examiner: Gossage, Glenn (Department: 2187)
U.S. Classes: C711S131000, C711S141000, C711S151000, C711S169000, C710S039000
Status: active
Patent Number: 06427189
TECHNICAL FIELD
This application relates in general to cache memory subsystems, and in particular to on-chip caches with queuing structures and out-of-order caches.
BACKGROUND
Computer systems may employ a multi-level hierarchy of memory, with relatively fast, expensive, limited-capacity memory at the highest level of the hierarchy, proceeding to slower, lower-cost, higher-capacity memory at the lowest level. The hierarchy may include a small, fast memory called a cache, either physically integrated within a processor or mounted physically close to the processor for speed. The computer system may employ separate instruction caches and data caches. In addition, the computer system may use multiple levels of caches. The use of a cache is generally transparent to a computer program at the instruction level and can thus be added to a computer architecture without changing the instruction set or requiring modification to existing programs.
Computer processors typically include a cache for storing data. When executing an instruction that requires access to memory (e.g., a read from or write to memory), a processor typically accesses the cache in an attempt to satisfy the instruction. It is therefore desirable to implement the cache so that the processor can access it (i.e., read from or write to it) quickly, enabling the processor to execute instructions quickly. Caches have been configured in both on-chip and off-chip arrangements. On-processor-chip caches have lower latency because they are closer to the processor, but because on-chip area is expensive, such caches are typically smaller than off-chip caches. Off-processor-chip caches have longer latencies because they are located farther from the processor, but such caches are typically larger than on-chip caches.
A prior art solution has been to have multiple caches, some small and some large. Typically, the smaller caches are located on-chip and the larger caches off-chip. In multi-level cache designs, the first level of cache (i.e., L0) is accessed first to determine whether a true cache hit for a memory access request is achieved. If a true cache hit is not achieved at the first level, a determination is made for the second level of cache (i.e., L1), and so on, until the memory access request is satisfied by some level of cache. If the requested address is not found in any of the cache levels, the processor sends a request to the system's main memory in an attempt to satisfy the request. In many processor designs, the time required to access an item for a true cache hit is one of the primary limiters of the processor's clock rate when the designer is seeking a single-cycle cache access time. In other designs, the cache access time may be multiple cycles, but processor performance generally improves as the cache access time in cycles is reduced. Therefore, optimizing the access time for cache hits is critical to the performance of the computer system.
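To make the serial, level-by-level probe concrete, the following minimal C sketch walks an address through a three-level hierarchy. It is a toy model under assumed names and sizes (level_holds() merely stands in for a real tag lookup), not the design described in this patent.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy three-level hierarchy (L0, L1, L2). level_holds() stands in for
     * a real tag lookup; the granularity rule is purely illustrative. */
    #define NUM_LEVELS 3

    static bool level_holds(int level, uint64_t addr)
    {
        static const uint64_t granularity[NUM_LEVELS] = { 64, 16, 4 };
        return addr % granularity[level] == 0;
    }

    /* Serial probe: try L0, then L1, then L2; fall back to main memory. */
    static int find_level(uint64_t addr)
    {
        for (int level = 0; level < NUM_LEVELS; level++)
            if (level_holds(level, addr))
                return level;   /* hit at this level */
        return -1;              /* missed every level: go to main memory */
    }

    int main(void)
    {
        uint64_t addr = 0x1230;
        int level = find_level(addr);
        if (level >= 0)
            printf("hit in L%d\n", level);
        else
            printf("miss in all levels: request sent to main memory\n");
        return 0;
    }

A real hierarchy would also fill the missed levels on the way back; the sketch shows only the probe order.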
Prior art cache designs for computer processors typically require “control data” or tags to be available before a cache data access begins. The tags indicate whether a desired address (i.e., an address required for a memory access request) is contained within the cache. Accordingly, prior art caches are typically implemented in a serial fashion, wherein upon the cache receiving a memory access request, a tag is obtained for the request, and thereafter if the tag indicates that the desired address is contained within the cache, the cache's data array is accessed to satisfy the memory access request. Thus, prior art cache designs typically generate tags indicating whether a true cache “hit” has been achieved for a level of cache, and only after a true cache hit has been achieved is the cache data actually accessed to satisfy the memory access request. A true cache “hit” occurs when a processor requests an item from a cache and the item is actually present in the cache. A cache “miss” occurs when a processor requests an item from a cache and the item is not present in the cache. The tag data indicating whether a “true” cache hit has been achieved for a level of cache typically comprises a tag match signal. The tag match signal indicates whether a match was made for a requested address in the tags of a cache level. However, such a tag match signal alone does not indicate whether a true cache hit has been achieved.
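The serial tag-then-data flow can be sketched for a simple direct-mapped cache as follows; the geometry (256 lines of 64 bytes), the field derivation, and all names are illustrative assumptions rather than details taken from this design.

    #include <stdbool.h>
    #include <stdint.h>

    /* Assumed geometry: 256 lines of 64 bytes, direct-mapped. */
    #define NUM_LINES  256
    #define LINE_BYTES 64

    typedef struct {
        bool     valid;
        uint64_t tag;
        uint8_t  data[LINE_BYTES];
    } cache_line;

    static cache_line lines[NUM_LINES];

    /* Step 1: read the control data (tag) and compare. Step 2: only on a
     * tag match is the data array accessed, i.e., the access is serial. */
    bool cache_read(uint64_t addr, uint8_t *out)
    {
        uint64_t offset = addr % LINE_BYTES;
        uint64_t index  = (addr / LINE_BYTES) % NUM_LINES;
        uint64_t tag    = addr / ((uint64_t)LINE_BYTES * NUM_LINES);

        cache_line *line = &lines[index];
        if (!line->valid || line->tag != tag)
            return false;               /* tag miss: data array untouched */

        *out = line->data[offset];      /* tag hit: now access the data */
        return true;
    }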
As an example, in a multi-processor system, a tag match may be achieved for a cache level, but the particular cache line for which the match was achieved may be invalid. For instance, the particular cache line may be invalid because another processor has snooped out that particular cache line. As used herein, a “snoop” is an inquiry from a first processor to a second processor as to whether a particular cache address is found within the second processor. Accordingly, in multi-processor systems a MESI signal is also typically utilized to indicate whether a line in cache is “Modified, Exclusive, Shared, or Invalid.” Therefore, the control data that indicates whether a true cache hit has been achieved for a level of cache typically comprises a MESI signal as well as the tag match signal. Only if a tag match is found for a level of cache and the MESI protocol indicates that the tag match is valid does the control data indicate that a true cache hit has been achieved. In view of the above, in prior art cache designs, a determination is first made as to whether a tag match is found for a level of cache, and then a determination is made as to whether the MESI protocol indicates that the tag match is valid. Thereafter, if a determination has been made that a true cache hit has been achieved, access begins to the actual cache data requested.
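In code form, the true-hit condition combines both pieces of control data. The fragment below is only a sketch of the check described above, with an assumed tag-entry layout.

    #include <stdbool.h>
    #include <stdint.h>

    /* MESI states: Modified, Exclusive, Shared, Invalid. */
    typedef enum { INVALID, SHARED, EXCLUSIVE, MODIFIED } mesi_state;

    typedef struct {
        uint64_t   tag;
        mesi_state state;
    } tag_entry;

    /* A tag match alone is not enough: the line may have been snooped out
     * by another processor and left Invalid. A true hit requires both a
     * matching tag and a valid MESI state. */
    bool is_true_hit(const tag_entry *e, uint64_t tag)
    {
        bool tag_match  = (e->tag == tag);
        bool mesi_valid = (e->state != INVALID);
        return tag_match && mesi_valid;
    }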
An example of a prior art, multi-level cache design is shown in FIG. 4. The exemplary cache design of FIG. 4 has a three-level cache hierarchy, with the first level referred to as L0, the second level referred to as L1, and the third level referred to as L2. Accordingly, as used herein, L0 refers to the first-level cache, L1 refers to the second-level cache, L2 refers to the third-level cache, and so on. It should be understood that prior art implementations of multi-level cache design may include more than three levels of cache, and prior art implementations having any number of cache levels are typically implemented in a serial manner as illustrated in FIG. 4. As discussed more fully hereafter, multi-level caches of the prior art are generally designed such that a processor accesses each level of cache in series until the desired address is found. For example, when an instruction requires access to an address, the processor typically accesses the first-level cache L0 to try to satisfy the address request (i.e., to try to locate the desired address). If the address is not found in L0, the processor then accesses the second-level cache L1 to try to satisfy the address request. If the address is not found in L1, the processor proceeds to access each successive level of cache in a serial manner until the requested address is found, and if the requested address is not found in any of the cache levels, the processor then sends a request to the system's main memory to try to satisfy the request.
Typically, when an instruction requires access to a particular address, a virtual address is provided from the processor to the cache system. As is well known in the art, such a virtual address typically contains an index field and a virtual page number field. The virtual address is input into a translation look-aside buffer (“TLB”) 510 for the L0 cache. The TLB 510 provides a translation from a virtual address to a physical address. The virtual address index field is input into the L0 tag memory array(s) 512. As shown in FIG. 4, th
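The address handling just described can be sketched as follows; the 4 KiB page size (hence a 12-bit offset) and the identity-mapped TLB stub are assumptions for illustration, since the text does not specify them. Reference numerals 510 and 512 refer to FIG. 4.

    #include <stdint.h>

    #define PAGE_SHIFT 12   /* assumed 4 KiB pages: 12-bit page offset */

    /* Stand-in for TLB 510: maps a virtual page number to a physical page
     * number. A real TLB is an associative hardware structure; the
     * identity mapping here is a placeholder. */
    static uint64_t tlb_translate(uint64_t vpn)
    {
        return vpn;
    }

    /* Split the virtual address into page number and offset, translate the
     * page number through the TLB, and rebuild the physical address. The
     * index field would meanwhile be sent to the L0 tag array (512). */
    uint64_t virt_to_phys(uint64_t vaddr)
    {
        uint64_t vpn    = vaddr >> PAGE_SHIFT;
        uint64_t offset = vaddr & (((uint64_t)1 << PAGE_SHIFT) - 1);
        uint64_t ppn    = tlb_translate(vpn);
        return (ppn << PAGE_SHIFT) | offset;
    }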
Inventors: Grutkowski, Tom; Mulla, Dean A.; Riedlinger, Reid James
Assignee: Hewlett-Packard Company