Method and apparatus for preventing cache pollution in...

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Details

C711S138000, C711S140000, C711S154000, C712S235000

Reexamination Certificate

active

06725338

ABSTRACT:

BACKGROUND OF INVENTION
Generally, a microprocessor operates much faster than main memory can supply data to it. Therefore, many computer systems temporarily store recently and frequently used data in a smaller but much faster cache memory. Cache memory may reside directly on the microprocessor chip (L1 cache) or may be external to the microprocessor (L2 cache).
Referring to FIG. 1, a typical computer system includes a microprocessor (also referred to herein as a “processor”) (10) having, among other things, a CPU (12), a load/store unit (14), an on-board cache memory (16), and a load miss buffer (LMB) (17) for holding loads that miss in the on-board cache while they access main memory. The microprocessor (10) is connected to a main memory (18) that holds data and program instructions to be executed by the microprocessor (10). Internally, the execution of program instructions is carried out by the CPU (12). Data needed by the CPU (12) to carry out an instruction are fetched by the load/store unit (14). Upon command from the CPU (12), the load/store unit (14) searches for the data first in the cache memory (16), then in main memory (18). Finding the data in cache memory is referred to as a “hit”; not finding it is referred to as a “miss.” Loads that miss in the cache are moved to the load miss buffer (LMB), from which they are picked to access memory, wait for the return of data, and then commit the data into the caches.
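To make the hit/miss flow above concrete, the following minimal C++ sketch models the FIG. 1 lookup path: a direct-mapped cache backed by main memory, with a load miss buffer holding loads that missed. The cache organization, its size, and all identifiers (Cache, MissEntry, kLines) are illustrative assumptions, not details taken from the patent.

#include <cstdint>
#include <cstddef>
#include <deque>
#include <iostream>
#include <unordered_map>

struct CacheLine {
    uint64_t tag;
    bool valid;
    uint32_t data;
};

struct MissEntry {
    uint64_t address;   // a load parked here while it accesses main memory
};

class Cache {
    static constexpr std::size_t kLines = 64;   // assumed size; not from the patent
    CacheLine lines_[kLines] = {};
public:
    // Returns true on a "hit"; on a "miss" the caller must fetch from memory.
    bool lookup(uint64_t addr, uint32_t& out) const {
        const CacheLine& line = lines_[addr % kLines];
        if (line.valid && line.tag == addr / kLines) { out = line.data; return true; }
        return false;
    }
    // Commit data returned from main memory into the cache.
    void fill(uint64_t addr, uint32_t data) {
        CacheLine& line = lines_[addr % kLines];
        line.tag = addr / kLines;
        line.valid = true;
        line.data = data;
    }
};

int main() {
    std::unordered_map<uint64_t, uint32_t> mainMemory{{0x40, 7}};
    Cache cache;
    std::deque<MissEntry> lmb;                  // the load miss buffer (17)

    uint64_t addr = 0x40;
    uint32_t value = 0;
    if (!cache.lookup(addr, value)) {           // miss: park the load in the LMB
        lmb.push_back({addr});
        value = mainMemory[addr];               // ...memory eventually returns data
        cache.fill(addr, value);                // ...which the LMB commits to the cache
        lmb.pop_front();
    }
    std::cout << "loaded " << value << "\n";
    return 0;
}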
Speculation is the use of data before it is known whether the data is correct. Microprocessors use speculation to increase the rate of computation. For instance, a microprocessor can speculate on the outcome of an operation that will provide data used to calculate a load address. By doing so, the processor can dispatch the load access earlier and, if the speculation was correct, complete the load operation sooner. If the speculation was incorrect, the load is reissued once the correct value is known.
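As a toy illustration of this kind of speculation (not the patent's mechanism), the sketch below dispatches a load early using a predicted base value and reissues it once the producer's actual result turns out to differ. The issueLoad helper and all addresses here are invented for the example.

#include <cstdint>
#include <iostream>

uint32_t issueLoad(uint64_t effectiveAddr) {
    // Stand-in for the cache/memory access; returns dummy data.
    return static_cast<uint32_t>(effectiveAddr & 0xff);
}

int main() {
    uint64_t offset = 0x40;
    uint64_t predictedBase = 0x1000;                    // guessed producer result
    uint32_t value = issueLoad(predictedBase + offset); // load dispatched early

    uint64_t actualBase = 0x2000;                       // producer completes: guess was wrong
    if (actualBase != predictedBase)
        value = issueLoad(actualBase + offset);         // reissue with the correct address
    std::cout << "loaded " << value << "\n";
    return 0;
}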
Speculative address loads are loads issued with an effective address generated from speculative data. For example, a consumer load may be issued speculatively before its producer has validated its calculation. In this situation, the consumer load uses speculative data to form an effective address and then proceeds to execute and access the cache. If the speculation turns out to be incorrect, the access may nevertheless have altered the cache. This can occur when the speculative load misses in the cache, causing the processor to bring in data for the incorrect effective address. Bringing in this data may replace other data in the cache that will be accessed in the near future; when the processor later accesses the replaced data, it is no longer in the cache, incurring a cache miss that would not have occurred without the speculation. This is referred to as “cache pollution.”
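The pollution scenario can be shown with a deliberately tiny model: a single-line “cache” in which a mis-speculated load's fill evicts a line that is needed shortly afterward. The one-line cache and all addresses are assumptions made purely for illustration.

#include <cstdint>
#include <iostream>

struct Line { uint64_t addr; bool valid; } line{0x40, true};  // useful line resident

bool access(uint64_t addr) {                    // true = hit; false = miss, then fill
    if (line.valid && line.addr == addr) return true;
    line = {addr, true};                        // the miss brings its block in (eviction)
    return false;
}

int main() {
    access(0x80);                               // speculative load, wrong address:
                                                // misses and evicts the 0x40 line
    bool hit = access(0x40);                    // the load that would have hit...
    std::cout << (hit ? "hit" : "miss caused by pollution") << "\n";
    return 0;
}

Run as-is, the program prints “miss caused by pollution”: the second access would have hit had the mis-speculated load not replaced its line.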
SUMMARY OF INVENTION
In general, in one aspect, the present invention involves a method of optimizing speculative address load processing by a microprocessor comprising identifying a speculative load, marking the speculative load, determining whether a miss occurs for the speculative load, and preventing use of the speculative load if a miss occurs.
In general, in one aspect, the present invention involves a method of optimizing speculative address load processing by a microprocessor comprising identifying a speculative load, marking the speculative load, inserting the load into a load miss queue, determining whether a miss occurs for the speculative load, and preventing the load miss queue from committing the speculative load to cache if a miss occurs.
In general, in one aspect, the present invention involves a microprocessor designed for optimized speculative address load processing by a microprocessor, the system comprising a program stored on computer-readable media for identifying a speculative load, marking the speculative load, determining whether a miss occurs for the speculative load, and preventing use of the speculative load if a miss occurs.
In general, in one aspect, the present invention involves a microprocessor designed for optimized speculative address load processing by a microprocessor, the system comprising a program stored on computer-readable media for identifying a speculative load, marking the speculative load, inserting the load into a load miss queue, determining whether a miss occurs for the speculative load, and preventing the load miss queue from committing the speculative load to cache if a miss occurs.
In general, in one aspect, the present invention involves a system for optimizing speculative address load processing by a microprocessor comprising means for identifying a speculative load, means for marking the speculative load, means for determining whether a miss occurs for the speculative load, and means for preventing use of the speculative load if a miss occurs.
In general, in one aspect, the present invention involves a system for optimizing speculative address load processing by a microprocessor comprising means for identifying a speculative load, means for marking the speculative load, means for inserting the load into a load miss queue, means for determining whether a miss occurs for the speculative load, and means for preventing the load miss queue from committing the speculative load to cache if a miss occurs.
In general, in one aspect, the present invention involves a computer for speculative address load processing comprising a microprocessor in communication with a main memory; the microprocessor comprising a central processing unit for identifying a speculative load; marking the speculative load; determining whether a miss occurs for the speculative load; and preventing use of the speculative load if a miss occurs.
Other aspects and advantages of the invention will be apparent from the following description and the appended claims.
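Read together, the aspects above describe a common flow: identify and mark a speculative load, detect a miss, and keep the load miss queue from committing the speculative data to the cache. The C++ sketch below is one hedged reading of that flow under assumed names (Load, MissQueueEntry, cacheHas); it is an interpretation of the summary, not the patent's actual hardware.

#include <cstdint>
#include <deque>
#include <iostream>

struct Load {
    uint64_t addr;
    bool speculative;   // the "mark" applied when the load is identified as speculative
};

struct MissQueueEntry {
    Load load;
    bool mayCommit;     // false: data returned by memory must not fill the cache
};

bool cacheHas(uint64_t) { return false; }   // stub: every access misses in this toy

int main() {
    std::deque<MissQueueEntry> missQueue;
    Load load{0x40, /*speculative=*/true};  // steps 1-2: identify and mark

    if (!cacheHas(load.addr)) {             // step 3: a miss occurs
        // step 4: insert into the miss queue, but bar the entry from committing
        // its returned data to the cache so a wrong-path fill cannot pollute it
        missQueue.push_back({load, !load.speculative});
    }
    for (const MissQueueEntry& e : missQueue)
        std::cout << (e.mayCommit ? "commit fill to cache\n"
                                  : "return data without cache fill\n");
    return 0;
}

Keying the mayCommit flag off the speculative mark is this sketch's interpretation of “preventing the load miss queue from committing the speculative load to cache.”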


REFERENCES:
patent: 5420991 (1995-05-01), Konigsfeld et al.
patent: 5826109 (1998-10-01), Abramson et al.
patent: 5870599 (1999-02-01), Hinton et al.
patent: 6098166 (2000-08-01), Leibholz et al.
patent: 6418516 (2002-07-01), Arimilli et al.
patent: 6438656 (2002-08-01), Arimilli et al.
patent: 6473833 (2002-10-01), Arimilli et al.
patent: 6487637 (2002-11-01), Arimilli et al.
patent: 6516462 (2003-02-01), Okunev et al.
patent: 6542988 (2003-04-01), Tremblay et al.
Gupta et al., “Improving Instruction Cache Behavior by Reducing Cache Pollution”, © 1990 IEEE, pp. 82-91.
Hwang et al., “An X86 Load/Store Unit with Aggressive Scheduling of Load/Store Operations”, © 1998 IEEE, pp. 1-8.
Reinman et al., “Predictive Techniques for Aggressive Load Speculation”, © 1998 IEEE, pp. 1-11.
Franklin et al., “ARB: A Hardware Mechanism for Dynamic Reordering of Memory References”, © 1996 IEEE, pp. 552-571.
Farkas et al., “How Useful Are Non-blocking Loads, Stream Buffers and Speculative Execution in Multiple Issue Processors?”, © 1995 IEEE, pp. 78-89.
