Microprocessor circuits, systems, and methods implementing a load target buffer with entries relating to prefetch desirability

Electrical computers and digital processing systems: processing – Instruction fetching – Prefetching

Reexamination Certificate


Details

Classification: C712S237000
Type: Reexamination Certificate
Status: active
Patent number: 06216219

ABSTRACT:

BACKGROUND OF THE INVENTION
The present embodiments relate to microprocessor technology, and are more particularly directed to microprocessor circuits, systems, and methods implementing a load target buffer with entries relating to prefetch desirability.
Microprocessor technology continues to advance at a rapid pace, with consideration given to all aspects of design. Designers constantly strive to increase performance while maximizing efficiency. With respect to performance, greater overall microprocessor speed is achieved by improving the speed of various related and unrelated microprocessor circuits and operations. For example, one way in which operational efficiency is improved is by providing parallel and out-of-order instruction execution. As another example, operational efficiency is also improved by providing faster and greater access to information, with such information including instructions and/or data. The present embodiments are primarily directed at this access capability and, more particularly, to improving access to data by way of prefetching such data in response to either data load or data store operations.
One very common approach in modern computer systems directed at improving access time to information is to include one or more levels of cache memory within the system. For example, a cache memory may be formed directly on a microprocessor, and/or a microprocessor may have access to an external cache memory. Typically, the lowest level cache (i.e., the first to be accessed) is smaller and faster than the cache or caches above it in the hierarchy, and the number of caches in a given memory hierarchy may vary. In any event, when the cache hierarchy is utilized and an information address is issued, the address is typically directed first to the lowest level cache to determine whether that cache stores information corresponding to the address, that is, whether there is a “hit” in that cache. If a hit occurs, then the addressed information is retrieved from the cache without having to access a memory higher in the memory hierarchy, where that higher ordered memory is likely slower to access than the cache that was hit. If a hit does not occur, a cache miss is said to occur, and the next higher ordered memory structure is then presented with the address at issue. If this next higher ordered memory structure is another cache, then once again a hit or miss may occur. If misses occur at each cache, then eventually the process reaches the highest ordered memory structure in the system, from which the addressed information may be retrieved.
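As a concrete illustration of the lookup order just described, the following C sketch probes a two-level hierarchy from the lowest level upward. It is a minimal model only: the two stub probe functions, the stand-in memory contents, and all names here are illustrative assumptions, not circuitry from the patent.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Stub probes for two cache levels; a real cache would check tags here. */
    static bool l1_probe(uint32_t addr, uint32_t *data)
    {
        (void)addr; (void)data;
        return false;                        /* modeled as always missing */
    }

    static bool l2_probe(uint32_t addr, uint32_t *data)
    {
        if (addr == 0x40) { *data = 7; return true; }
        return false;
    }

    static uint32_t main_memory_read(uint32_t addr)
    {
        return addr * 2;                     /* stand-in memory contents */
    }

    /* The lookup order from the passage: try the lowest level cache first;
     * on each miss, present the same address to the next higher ordered
     * memory structure, ending at the highest ordered memory. */
    static uint32_t read_memory(uint32_t addr)
    {
        uint32_t data;
        if (l1_probe(addr, &data)) return data;  /* hit in lowest level   */
        if (l2_probe(addr, &data)) return data;  /* L1 miss, L2 hit       */
        return main_memory_read(addr);           /* missed in every cache */
    }

    int main(void)
    {
        printf("%u\n", read_memory(0x40));  /* 7: satisfied by the L2 stub */
        printf("%u\n", read_memory(0x10));  /* 32: falls through to memory */
        return 0;
    }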
Given the existence of cache systems, another prior art technique for increasing speed involves the prefetching of information in combination with cache systems. Prefetching involves a speculative retrieval of information, or a preparation to retrieve it, where the information is brought from a higher level memory system, such as an external memory, into a cache under the expectation that the microprocessor may need it for an anticipated event at some point after the next successive clock cycle. In this regard, a load is perhaps the operation most often associated with such retrieval, but prefetching may concern a data store as well. More specifically, a load occurs where specific data is retrieved so that it may be used by the microprocessor. However, a store operation often first retrieves a group of data, of which a part will be overwritten. Still further, some store operations, such as a store interrogate, do not actually retrieve data, but prepare some resource external to the microprocessor for an upcoming event which will store information to that resource. Each of these cases, for purposes of this Background and the present embodiments to follow, should be considered a type of prefetch. In any event, in the case of prefetching where data is speculatively retrieved into an on-chip cache, if the anticipated event giving rise to the prefetch actually occurs, the prefetched information is already available in the cache and, therefore, may be fetched from the cache without having to seek it from a higher ordered memory system. In other words, prefetching lowers the risk of a cache miss once an actual fetch is necessary.
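The following C sketch models the benefit described above with a toy one-line cache and a slow backing store; the functions, values, and single-entry cache are hypothetical stand-ins, not the patent's mechanism. It shows that when the anticipated access actually occurs after a prefetch, the demand load hits in the cache rather than reaching the higher ordered memory.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t backing_store[256];          /* higher ordered memory  */
    static uint32_t cached_addr = UINT32_MAX;    /* tag of the single line */
    static uint32_t cached_data;

    /* Speculative retrieval: bring the data into the "cache" before the
     * anticipated load or store actually needs it. */
    static void prefetch(uint32_t predicted_addr)
    {
        cached_addr = predicted_addr;
        cached_data = backing_store[predicted_addr & 0xFF];  /* slow access */
    }

    /* Demand fetch: a hit avoids the higher ordered memory entirely. */
    static uint32_t load(uint32_t addr, bool *hit)
    {
        if (addr == cached_addr) { *hit = true; return cached_data; }
        *hit = false;
        prefetch(addr);               /* ordinary fill on a demand miss */
        return cached_data;
    }

    int main(void)
    {
        bool hit;
        backing_store[42] = 0xBEEF;
        prefetch(42);                 /* anticipated event: a load from 42 */
        uint32_t v = load(42, &hit);
        printf("data=%#x hit=%d\n", v, hit);   /* prints data=0xbeef hit=1 */
        return 0;
    }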
Given the above techniques, the present inventors provide within a microprocessor a load target buffer (“LTB”) which in certain embodiments predicts the target data address to be used as the address for a prefetch, and in still further embodiments includes entries of different lengths based on prefetch desirability and, in some instances, on data fetch pattern behavior. Thus, below are presented various embodiments which address various prior art considerations and still further aspects ascertainable by a person skilled in the art.
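To make the LTB's role concrete, here is a minimal C sketch of the lookup step, assuming a small direct-mapped buffer: when an instruction is fetched, its address indexes the LTB, and a hit supplies the predicted target data address for a prefetch request. The table size, field names, and training step are all assumptions for illustration, not details from the patent.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define LTB_ENTRIES 4  /* illustrative size */

    typedef struct {
        bool     valid;
        uint32_t instr_addr;        /* address of the data fetching instruction */
        uint32_t predicted_target;  /* predicted target data address */
    } ltb_entry_t;

    static ltb_entry_t ltb[LTB_ENTRIES];

    /* Consult the LTB when an instruction is fetched; on a hit, the predicted
     * target becomes the address of a prefetch request to the memory system. */
    static void on_instruction_fetch(uint32_t instr_addr)
    {
        ltb_entry_t *e = &ltb[instr_addr % LTB_ENTRIES];
        if (e->valid && e->instr_addr == instr_addr)
            printf("prefetch request for address %#x\n", e->predicted_target);
    }

    int main(void)
    {
        /* Train one entry: the load at 0x100 last fetched data from 0x2000. */
        ltb[0x100 % LTB_ENTRIES] = (ltb_entry_t){ true, 0x100, 0x2000 };
        on_instruction_fetch(0x100);  /* hit: issues a prefetch for 0x2000 */
        on_instruction_fetch(0x104);  /* miss: no prefetch issued */
        return 0;
    }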
BRIEF SUMMARY OF THE INVENTION
In one embodiment there is a microprocessor comprising a memory system for outputting data in response to an address, wherein the memory system is further operable to receive a prefetch request having a predicted target data address. The microprocessor further includes a load target circuit, which comprises a first plurality of entries of a first length and a second plurality of entries of a second length. Each of the first plurality of entries comprises a value for corresponding the entry to one of a first plurality of data fetching instructions, and further comprises a value for indicating a corresponding predicted target data address. Each of the second plurality of entries likewise comprises a value for corresponding the entry to one of a second plurality of data fetching instructions; however, each of the second plurality of data fetching instructions is of a type for which it is undesirable to issue a prefetch request. Other circuits, systems, and methods are also disclosed and claimed.
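The two entry lengths described in this summary might be sketched as the following C types. The field names and widths are assumptions; one plausible reading is that the shorter entries omit the predicted target address, which is what would make the second length smaller.

    #include <stdbool.h>
    #include <stdint.h>

    /* First plurality: full-length entries. Each corresponds to one data
     * fetching instruction and carries a predicted target data address
     * that the memory system can take as a prefetch request. */
    typedef struct {
        bool     valid;
        uint32_t instr_tag;          /* corresponds entry to its instruction */
        uint32_t predicted_target;   /* predicted target data address        */
        /* additional prediction state (e.g., pattern bits) could follow */
    } ltb_full_entry_t;

    /* Second plurality: shorter entries for data fetching instructions of a
     * type for which it is undesirable to issue a prefetch request; under
     * the reading above, no predicted address needs to be stored. */
    typedef struct {
        bool     valid;
        uint32_t instr_tag;
    } ltb_short_entry_t;

Under this reading, the shorter entries save buffer space while still letting the load target circuit recognize data fetching instructions that should not trigger a prefetch request.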


