Electrical computers and digital processing systems: memory – Address formation – Address mapping
Reexamination Certificate
1997-12-29
2001-02-27
Yoo, Do (Department: 2185)
C711S137000, C711S213000, C712S207000
Reexamination Certificate
active
06195735
ABSTRACT:
CROSS-REFERENCES TO RELATED APPLICATIONS
Not Applicable.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
Not Applicable.
BACKGROUND OF THE INVENTION
Microprocessor technology continues to advance at a rapid pace, with consideration given to all aspects of design. Designers constantly strive to increase performance while maximizing efficiency. With respect to performance, greater overall microprocessor speed is achieved by improving the speed of various related and unrelated microprocessor circuits and operations. For example, operational efficiency is improved by providing parallel and out-of-order instruction execution. As another example, operational efficiency is also improved by providing faster and greater access to information, where such information includes instructions and/or data. The present embodiments are primarily directed at this access capability.
One very common approach in modern computer systems directed at improving access time to information is to include one or more levels of cache memory within the system. For example, a cache memory may be formed directly on a microprocessor, and/or a microprocessor may have access to an external cache memory. Typically, the lowest level cache (i.e., the first to be accessed) is smaller and faster than the cache or caches above it in the hierarchy, and the number of caches in a given memory hierarchy may vary. In any event, when an information address is issued within such a hierarchy, the address is typically directed to the lowest level cache to determine whether that cache stores information corresponding to the address, that is, whether there is a “hit” in that cache. If a hit occurs, the addressed information is retrieved from the cache without having to access a memory higher in the memory hierarchy, where that higher ordered memory is likely slower to access than the hit cache memory. On the other hand, if a hit does not occur, a cache miss is said to occur, and the next higher ordered memory structure is presented with the address at issue. If this next higher ordered memory structure is another cache, then once again a hit or miss may occur. If misses occur at each cache, the process eventually reaches the highest ordered memory structure in the system, at which point the addressed information may be retrieved from that memory.
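The lookup sequence described above (probe the lowest cache first, and escalate only on a miss) can be sketched in software. The following Python model is purely illustrative; the class and function names are not from the patent, and real caches operate on fixed-size lines with tag comparison rather than a dictionary:

```python
class CacheLevel:
    """A toy cache level; the address -> data map stands in for tag/line storage."""
    def __init__(self, name):
        self.name = name
        self.lines = {}  # address -> cached information

    def lookup(self, address):
        # Returns the cached information, or None to model a miss.
        return self.lines.get(address)

def read(hierarchy, main_memory, address):
    # Probe caches from lowest (fastest) to highest order.
    for level in hierarchy:
        data = level.lookup(address)
        if data is not None:
            return data, level.name  # hit: no higher ordered memory is accessed
    # Misses at every cache: the highest ordered memory supplies the data.
    return main_memory[address], "main memory"

l1, l2 = CacheLevel("L1"), CacheLevel("L2")
l2.lines[0x40] = "payload"
memory = {0x40: "payload", 0x80: "other"}
print(read([l1, l2], memory, 0x40))  # ('payload', 'L2'): L1 miss, L2 hit
print(read([l1, l2], memory, 0x80))  # ('other', 'main memory'): misses everywhere
```

The sketch omits fill-on-miss (copying the data into the caches it missed in), which a real hierarchy would perform.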
Given the existence of cache systems, another prior art technique for increasing speed involves prefetching information in combination with cache systems. Prefetching involves the speculative retrieval of information, or the preparation to retrieve it, where the information is retrieved from a higher level memory system, such as an external memory, into a cache under the expectation that the retrieved information may be needed by the microprocessor for an anticipated event at some point after the next successive clock cycle. In this regard, a load is perhaps the operation most often thought of in connection with retrieval, but note that prefetching may concern a data store as well. More specifically, a load occurs where specific data are retrieved so that the retrieved data may be used by the microprocessor. A store operation, however, often first retrieves a group of data, where a part of that group will be overwritten. Still further, some store operations, such as a store interrogate, do not actually retrieve data, but instead prepare some resource external to the microprocessor for an upcoming event which will store information to that resource. Each of these cases, for purposes of this Background and the present embodiments to follow, should be considered a type of prefetch. In any event, in the case of prefetching where data is speculatively retrieved into an on-chip cache, if the anticipated event giving rise to the prefetch actually occurs, the prefetched information is already available in the cache and, therefore, may be fetched from the cache without having to seek it from a higher ordered memory system. In other words, prefetching lowers the risk of a cache miss once an actual fetch is necessary.
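The payoff described above (a prefetch turns a would-be demand miss into a hit) can be illustrated with a minimal Python model. All names here are hypothetical, and the boolean returned by fetch merely records whether the slower demand path was avoided:

```python
class Cache:
    """Toy single-level cache distinguishing speculative from demand fills."""
    def __init__(self):
        self.lines = {}  # address -> data

    def prefetch(self, memory, address):
        # Speculative retrieval: the line is brought in before any demand for it.
        self.lines[address] = memory[address]

    def fetch(self, memory, address):
        if address in self.lines:
            return self.lines[address], True     # hit: the prefetch paid off
        self.lines[address] = memory[address]    # demand miss: slower path taken
        return self.lines[address], False

memory = {0x100: "needed soon", 0x200: "never prefetched"}
cache = Cache()
cache.prefetch(memory, 0x100)            # issued for an anticipated event
value, hit = cache.fetch(memory, 0x100)  # the event occurs: ("needed soon", True)
_, hit2 = cache.fetch(memory, 0x200)     # no prefetch was issued: miss (False)
```

A store-style prefetch would behave the same way, except that part of the retrieved group is later overwritten rather than read.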
Given the above techniques, the present inventors have further recognized additional complexities relating to cache and prefetching techniques, particularly as the frequency of prefetching activities advances. Thus, presented below are various embodiments which address these as well as other considerations ascertainable by a person skilled in the art.
BRIEF SUMMARY OF THE INVENTION
In one embodiment, there is a microprocessor comprising a cache circuit and circuitry for issuing a prefetch request. The prefetch request comprises an address and requests information of a first size from the cache circuit. The microprocessor also includes prefetch control circuitry, which comprises circuitry for receiving the prefetch request and evaluation circuitry for evaluating system parameters corresponding to the prefetch request. Additionally, the prefetch control circuitry comprises circuitry, responsive to the evaluation circuitry, for determining a size of information for a prefetch operation starting at the address from the cache circuit, where the prefetch operation corresponds to the prefetch request. Other circuits, systems, and methods are also disclosed and claimed.
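The summary above does not disclose a concrete size-selection policy, only that control circuitry evaluates system parameters and determines the size of the prefetch operation. Purely as an illustration of that idea, the sketch below resizes a request based on invented parameters; every name and threshold here is hypothetical, not from the patent:

```python
def prefetch_size(requested_size, parameters):
    """Return the size actually used for a prefetch operation.

    requested_size: the first size carried by the prefetch request.
    parameters: a dict of (hypothetical) system parameters the
    evaluation circuitry might consider.
    """
    size = requested_size
    if parameters.get("bus_busy"):
        # Scale back under contention so the prefetch costs less bandwidth.
        size = min(size, 32)
    if parameters.get("access_pattern_sequential"):
        # Widen the operation when more nearby data is likely to be needed.
        size = max(size, 128)
    return size

print(prefetch_size(64, {"bus_busy": True}))                   # 32
print(prefetch_size(64, {"access_pattern_sequential": True}))  # 128
print(prefetch_size(64, {}))                                   # 64
```

The point of the sketch is only the shape of the mechanism: the operation still starts at the requested address, but its size is decoupled from the size the request asked for.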
REFERENCES:
patent: 3898624 (1975-08-01), Tobias
patent: 4881170 (1989-11-01), Morisada
patent: 5317727 (1994-05-01), Tsuchida et al.
patent: 5361391 (1994-11-01), Westberg
patent: 5544342 (1996-08-01), Dean
patent: 5659713 (1997-08-01), Goodwin et al.
patent: 5724613 (1998-03-01), Wszolek
patent: 5761464 (1998-06-01), Hopkins
patent: 5778423 (1998-07-01), Sites et al.
patent: 5778435 (1998-07-01), Berenbaum et al.
patent: 5796971 (1998-08-01), Emberson
patent: 5809529 (1998-09-01), Mayfield
patent: 5835967 (1998-11-01), McMahon
patent: 5838945 (1998-11-01), Emberson
patent: 5970508 (1999-10-01), Howe et al.
patent: 6058461 (2000-05-01), Lewchuk et al.
patent: 6085291 (2000-07-01), Hicks et al.
IBM Technical Disclosure Bulletin, “Method for Accessing Non-Bursting Regions in Local Bus Address Space without Pre-Fetching,” Vol. 37, No. 42, pp. 399-400, Apr. 1994.
Tabak, Daniel, “Advanced Microprocessors,” Second Edition, pp. 159-161, Apr. 1994.
Tabak, Daniel, “Advanced Microprocessors,” Second Edition, pp. 177-178, Dec. 1995.
Krueger Steven D.
Shiell Jonathan H.
Brady III W. James
Marshall, Jr. Robert D.
McLean Kimberly
Telecky , Jr. Frederick J.
Texas Instruments Incorporated
Prefetch circuitry for prefetching variable size data