Data prefetcher with predictor capabilities
Type: Reexamination Certificate
Filing date: 2001-02-23
Issue date: 2004-11-16
Examiner: Kim, Matthew (Department: 2186)
Classification: Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories
U.S. Class: C711S213000
Status: Active
Patent Number: 06820173
FIELD OF THE INVENTION
The present invention relates to accessing memory, and more particularly to reducing latency while accessing memory.
BACKGROUND OF THE INVENTION
Prior art FIG. 1 illustrates one exemplary prior art architecture that relies on conventional techniques of accessing information in memory. As shown, a processor 102 is provided which is coupled to a Northbridge 104 via a system bus 106. The Northbridge 104 is in turn coupled to dynamic random access memory (DRAM) 108. In use, the processor 102 sends requests to the Northbridge 104 for information stored in the DRAM 108. In response to such requests, the Northbridge 104 retrieves information from the DRAM 108 and delivers it to the processor 102 via the system bus 106. This process of calling and waiting for the retrieval of information from the DRAM 108 often causes latency in the performance of operations by the processor 102. One solution to such latency involves the utilization of high-speed cache memory 110 on the Northbridge 104 or the processor 102 for storing instructions and/or data.
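The cost of that round trip, and the benefit of the cache 110, can be illustrated with a toy latency model. The sketch below is not taken from the patent; the cycle counts and identifiers (CACHE_CYCLES, DRAM_CYCLES, access_cycles) are assumptions chosen purely for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative, made-up latencies in processor clock cycles. */
    #define CACHE_CYCLES    3   /* hit in the on-chip cache 110                       */
    #define DRAM_CYCLES   150   /* round trip over bus 106, Northbridge 104, DRAM 108 */

    /* Cycles spent servicing one memory request. */
    static int access_cycles(bool hit_in_cache)
    {
        return hit_in_cache ? CACHE_CYCLES : DRAM_CYCLES;
    }

    int main(void)
    {
        /* Average access time with an assumed 90% hit rate versus no cache at all. */
        double with_cache = 0.9 * access_cycles(true) + 0.1 * access_cycles(false);
        double no_cache   = access_cycles(false);

        printf("average cycles with cache:    %.1f\n", with_cache);
        printf("average cycles without cache: %.1f\n", no_cache);
        return 0;
    }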
Cache memory has long been used in data processing systems to improve the performance thereof. A cache memory is a relatively high speed, relatively small memory in which active portions of program instructions and/or data are placed. The cache memory is typically faster than main memory by a factor of up to ten or more, and typically approaches the speed of the processor itself. By keeping the most frequently accessed and/or predicted information in the high-speed cache memory, the average memory access time approaches the access time of the cache.
The need for cache memory continues even as the speed and density of microelectronic devices improve. In particular, as microelectronic technology improves, processors are becoming faster. Every new generation of processors is about twice as fast as the previous generation, due to the shrinking features of integrated circuits. Unfortunately, memory speed has not increased concurrently with microprocessor speed. DRAM technology rides the same technological curve as microprocessors: technological improvements yield denser DRAMs, but not substantially faster DRAMs. Thus, while microprocessor performance has improved by a factor of about one thousand in the last ten to fifteen years, DRAM speeds have improved by only 50%. Accordingly, there is currently about a twenty-fold gap between the speed of present-day microprocessors and DRAM. In the future this speed discrepancy between the processor and memory will likely increase.
Caching reduces this large speed discrepancy between processor and memory cycle times by using a fast static memory buffer to hold a small portion of the instructions and/or data that are currently being used. When the processor needs a new instruction and/or data, it first looks in the cache. If the instruction and/or data is in the cache (referred to as a cache “hit”), the processor can obtain the instruction and/or data quickly and proceed with the computation. If the instruction and/or data is not in the cache (referred to as a cache “miss”), the processor must wait for the instruction and/or data to be loaded from main memory.
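A minimal sketch of that hit/miss decision, assuming a direct-mapped cache; the sizes, structure layout, and function names below are illustrative assumptions and are not taken from the patent.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NUM_LINES  256   /* illustrative number of cache lines */
    #define LINE_BYTES  64   /* illustrative line (block) size     */

    struct cache_line {
        bool     valid;
        uint32_t tag;
        uint8_t  data[LINE_BYTES];
    };

    static struct cache_line cache[NUM_LINES];

    /* Read one byte at 'addr'; returns true on a hit, false on a miss
     * (in which case the whole block is loaded from 'dram' first). */
    static bool cache_read(uint32_t addr, const uint8_t *dram, uint8_t *out)
    {
        uint32_t block  = addr / LINE_BYTES;
        uint32_t index  = block % NUM_LINES;
        uint32_t tag    = block / NUM_LINES;
        uint32_t offset = addr % LINE_BYTES;
        struct cache_line *line = &cache[index];

        if (line->valid && line->tag == tag) {          /* cache "hit"  */
            *out = line->data[offset];
            return true;
        }

        /* cache "miss": stall while the block is fetched from main memory */
        memcpy(line->data, dram + block * LINE_BYTES, LINE_BYTES);
        line->valid = true;
        line->tag   = tag;
        *out = line->data[offset];
        return false;
    }

    int main(void)
    {
        static uint8_t dram[1 << 20];   /* stand-in for main memory */
        dram[12345] = 42;

        uint8_t v;
        bool first  = cache_read(12345, dram, &v);   /* miss: loads the block  */
        bool second = cache_read(12345, dram, &v);   /* hit: block is resident */
        printf("first=%s second=%s value=%u\n",
               first ? "hit" : "miss", second ? "hit" : "miss", v);
        return 0;
    }

Real caches add associativity and replacement policies, but the hit/miss decision above is the essential step the text describes.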
Cache performance relies on the phenomenon of “locality of reference”. The locality-of-reference phenomenon reflects the fact that most computer program processing proceeds in a sequential fashion with multiple loops, with the processor repeatedly accessing a set of instructions and/or data in a localized area of memory. In view of this phenomenon, a small, high-speed cache memory may be provided for storing data blocks containing data and/or instructions from main memory which are presently being processed. Although the cache is only a small fraction of the size of main memory, a large fraction of memory requests will locate data or instructions in the cache memory, because of the locality-of-reference property of programs.
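As a concrete, entirely illustrative example of this property, the loop below repeatedly sums a small array: consecutive elements share cache lines (spatial locality) and the same data is reused across passes (temporal locality), so nearly all accesses after the first pass are cache hits. The array size and pass count are arbitrary assumptions.

    #include <stdio.h>

    #define N 1024   /* small enough that the whole array fits in a typical cache */

    int main(void)
    {
        static int a[N];
        long sum = 0;

        /* Spatial locality: a[i] and a[i+1] usually share a cache line,
         * so one memory fetch services several later accesses.
         * Temporal locality: the array, 'sum', and the loop code are
         * reused on every pass and stay resident in the cache. */
        for (int pass = 0; pass < 100; ++pass)
            for (int i = 0; i < N; ++i)
                sum += a[i];

        printf("%ld\n", sum);
        return 0;
    }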
Unfortunately, many programs do not exhibit sufficient locality of reference to benefit significantly from conventional caching. For example, many large scale applications, such as scientific computing, Computer-Aided Design (CAD) applications and simulation, typically exhibit poor locality of reference and therefore suffer from high cache miss rates. These applications therefore tend to run at substantially lower speed than the processor's peak performance.
In an attempt to improve the performance of a cache, notwithstanding poor locality of reference, “predictive” caching has been used. In predictive caching, an attempt is made to predict where a next memory access will occur, and the potential data block of memory is preloaded into the cache. This operation is also referred to as “prefetching”. In one prior art embodiment, prefetching includes retrieving serially increasing addresses starting from a current instruction. Serial prefetchers such as this are commonly used in a number of devices where there is a single data stream with such serially increasing addresses.
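A minimal sketch of such a serial (next-line) prefetcher, assuming a fixed line size and a small prefetch depth; the identifiers and constants are illustrative assumptions and are not drawn from any of the devices mentioned.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES     64   /* illustrative cache-line size         */
    #define PREFETCH_DEPTH  2   /* illustrative number of lines ahead   */

    /* Stand-in for the logic that starts a memory read for one line. */
    static void issue_fetch(uint32_t line_addr)
    {
        printf("fetch line at 0x%08x\n", line_addr);
    }

    /* On every demand access, also request the next few sequential lines. */
    static void on_demand_access(uint32_t addr)
    {
        uint32_t line = addr & ~(uint32_t)(LINE_BYTES - 1);

        issue_fetch(line);                               /* the demand fetch  */
        for (int i = 1; i <= PREFETCH_DEPTH; ++i)
            issue_fetch(line + i * LINE_BYTES);          /* serial prefetches */
    }

    int main(void)
    {
        on_demand_access(0x1000);   /* also prefetches lines 0x1040 and 0x1080 */
        return 0;
    }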
Unfortunately, predictive caching schemes may often perform poorly because of the difficulty of predicting where a next memory access will occur. Performance may be degraded for two reasons. First, the prediction system may inaccurately predict where the next memory access will occur, so that incorrect data blocks of memory are prefetched; prefetching mechanisms are frequently defeated by the existence of multiple streams of data. Second, the prediction computation itself may be so computationally intensive as to degrade overall system response.
One predictive caching scheme attempts to dynamically detect “strides” in a program in order to predict a future memory access. See, for example, International Patent Application WO 93/18459 to Krishnamohan et al., entitled “Prefetching Into a Cache to Minimize Main Memory Access Time and Cache Size in a Computer System”, and Eickemeyer et al., “A Load Instruction Unit for Pipeline Processors”, IBM Journal of Research and Development, Vol. 37, No. 4, July 1993, pp. 547-564. Unfortunately, as described above, prediction based on program strides may only be accurate for highly regular programs. Moreover, the need to calculate a program stride during program execution may itself decrease the speed of the caching system.
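The flavor of such stride detection can be sketched as follows. This is a simplified per-load-instruction stride table, not the mechanism of either reference cited above; the table size, confidence policy, and identifiers are assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define TABLE_SIZE 64   /* illustrative number of tracked load instructions */

    struct stride_entry {
        uint32_t pc;         /* address of the load instruction     */
        uint32_t last_addr;  /* last data address it touched        */
        int32_t  stride;     /* last observed stride                */
        int      confidence; /* how many times the stride repeated  */
    };

    static struct stride_entry table[TABLE_SIZE];

    static void issue_prefetch(uint32_t addr)
    {
        printf("prefetch 0x%08x\n", addr);
    }

    /* Called on every executed load with its PC and effective address. */
    static void observe_load(uint32_t pc, uint32_t addr)
    {
        struct stride_entry *e = &table[pc % TABLE_SIZE];

        if (e->pc == pc) {
            int32_t stride = (int32_t)(addr - e->last_addr);
            if (stride == e->stride && stride != 0)
                e->confidence++;
            else
                e->confidence = 0;
            e->stride = stride;

            /* Only prefetch once the same stride has repeated a few times. */
            if (e->confidence >= 2)
                issue_prefetch(addr + stride);
        } else {
            e->pc = pc;                 /* new (or evicted) entry */
            e->stride = 0;
            e->confidence = 0;
        }
        e->last_addr = addr;
    }

    int main(void)
    {
        /* A load at PC 0x400 walking an array with a stride of 8 bytes. */
        for (uint32_t i = 0; i < 6; ++i)
            observe_load(0x400, 0x2000 + 8 * i);
        return 0;
    }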
Another attempt at predictive caching is described in U.S. Pat. No. 5,305,389 to Palmer entitled “Predictive Cache System”. In this system, prefetches to a cache memory subsystem are made from predictions which are based on access patterns stored by context. An access pattern is generated from prior accesses of a data processing system processing in a like context. During a training sequence, an actual trace of memory accesses is processed to generate unit patterns which serve in making future predictions and to identify statistics such as pattern accuracy for each unit pattern. Again, it may be difficult to accurately predict performance for large scale applications. Moreover, the need to provide training sequences may require excessive overhead for the system.
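Loosely, “access patterns stored by context” can be pictured as a table of learned address sequences that is filled during a training pass and replayed when the same context recurs. The sketch below is only a caricature of that idea, not the system claimed in the Palmer patent; the context key, pattern length, and interface are assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define PATTERN_LEN  4    /* illustrative number of addresses per pattern */
    #define NUM_CONTEXTS 16   /* illustrative number of tracked contexts      */

    /* One learned unit pattern: the addresses that followed a context before. */
    struct pattern {
        int      trained;
        uint32_t next[PATTERN_LEN];
    };

    static struct pattern patterns[NUM_CONTEXTS];

    /* Training: remember the addresses that followed this context in a trace. */
    static void train(unsigned ctx, const uint32_t trace[PATTERN_LEN])
    {
        struct pattern *p = &patterns[ctx % NUM_CONTEXTS];
        for (int i = 0; i < PATTERN_LEN; ++i)
            p->next[i] = trace[i];
        p->trained = 1;
    }

    /* Prediction: when the same context recurs, prefetch the stored pattern. */
    static void on_context(unsigned ctx)
    {
        struct pattern *p = &patterns[ctx % NUM_CONTEXTS];
        if (!p->trained)
            return;
        for (int i = 0; i < PATTERN_LEN; ++i)
            printf("prefetch 0x%08x\n", p->next[i]);
    }

    int main(void)
    {
        uint32_t trace[PATTERN_LEN] = { 0x9000, 0x9100, 0x9040, 0x9800 };
        train(7, trace);      /* training sequence for context 7         */
        on_context(7);        /* a later recurrence replays the pattern  */
        return 0;
    }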
DISCLOSURE OF THE INVENTION
A system, method and article of manufacture are provided for retrieving information from memory. Initially, processor requests for information from a first memory are monitored. A future processor request for information is then predicted based on the monitored requests. Thereafter, one or more speculative requests are issued for retrieving information from the first memory in accordance with the prediction. The retrieved information is subsequently cached in a second memory so that it can be returned in response to processor requests without accessing the first memory. By allowing multiple speculative requests to be issued, throughput of information in memory is maximized.
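As a rough sketch only, and not the claimed implementation, the flow just described might be arranged as below; the next-line predictor, the request counter, and every identifier are assumptions introduced for illustration. The conditional check anticipates the threshold option discussed in the next paragraph.

    #include <stdint.h>
    #include <stdio.h>

    #define LINE_BYTES           64
    #define PREDETERMINED_AMOUNT  2   /* illustrative threshold for the option below */

    static unsigned total_requests;   /* running count of processor + speculative requests */

    /* Stand-in for a read issued to the first memory (e.g. the DRAM). */
    static void request_from_first_memory(uint32_t addr, int speculative)
    {
        printf("%s request for 0x%08x\n",
               speculative ? "speculative" : "processor", addr);
        total_requests++;
    }

    /* Stand-in for placing the returned data in the second memory (the cache). */
    static void cache_in_second_memory(uint32_t addr)
    {
        printf("cached line 0x%08x in second memory\n", addr);
    }

    /* Monitor each processor request, predict the next one, and conditionally
     * issue a speculative request so the data is already cached when asked for. */
    static void on_processor_request(uint32_t addr)
    {
        request_from_first_memory(addr, 0);

        /* Trivial stand-in predictor: assume the next sequential line. */
        uint32_t predicted = addr + LINE_BYTES;

        /* Conditionally issue, per the threshold option described in the text. */
        if (total_requests > PREDETERMINED_AMOUNT) {
            request_from_first_memory(predicted, 1);
            cache_in_second_memory(predicted);
        }
    }

    int main(void)
    {
        on_processor_request(0x4000);
        on_processor_request(0x4040);
        on_processor_request(0x4080);   /* by now the running total exceeds the threshold */
        return 0;
    }

A real prefetcher would track many in-flight requests and retire them as data returns; the single counter here only stands in for the "total number of requests" mentioned above.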
In one embodiment of the present invention, a total number of the prediction and/or processor requests may be determined. As such, the speculative requests may be conditionally issued if the total number of the requests exceeds a predetermined amount. As an option, the speculative requests may be con
Inventors: Bittel, Donald A.; Case, Colyn S.; Choi, Woo H.; Kim, Matthew
Assignee: NVIDIA Corporation
Attorney, Agent or Firm: Zilka-Kotab, PC