Method and system for predicting addresses and prefetching data

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Patent

Details

711/231, 711/217, 711/218, G06F 12/08

Patent

active

058025669

DESCRIPTION:

BRIEF SUMMARY
BACKGROUND OF THE INVENTION

1. Field of the Invention
This invention relates to a method for increasing the speed of processing data in a computer system.
2. Description of the Related Art
In recent years, progress in VLSI (Very Large Scale Integrated) circuits has widened the gap in access times between microprocessors and memory devices. The memory devices are slow compared with the fast microprocessors. In order to narrow this gap in speed, caches have been introduced. These caches are installed between the microprocessor and the memory device. Caches are relatively small and fast memory devices in the form of chips. In one cache, for example, data that is often used by the microprocessor is stored. The cache fetches its data from a larger memory device, which is slow compared to the cache and the microprocessor. Sometimes, two or more caches are arranged hierarchically between a microprocessor and a large memory device.
Caches can also be found in multiprocessor systems, e.g., where each microprocessor is connected to its own cache and where information can be stored in and retrieved from a large memory device through each cache.
An example of the use of this technique is a multiprocessor system in which different processors execute separate sections of a program and therefore must fetch different data from the memory device. When a processor has completed one execution and is about to start a new one, it needs only a fraction of the data stored in the memory device. A processor in this situation first requests the first piece of data from the cache. If the cache does not hold this piece of data, it fetches the data from the memory device and stores it in the cache. As the processor requests data that is not stored in the cache, the data contents of the cache increase. Since the processor is only executing a specific section of the program, the data sought by the processor will be found in the cache more and more often as the execution advances, because the cache has already fetched these data upon previous requests from the processor. The access time for fetching a piece of data from the cache falls considerably short of the access time for the processor to fetch a piece of data directly from the large memory device. The speed of data transfer between memory and processor is thus increased, decreasing the gap between memory-device speed and processor speed, which in turn increases data-processing speed.
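
By way of illustration only (this is not part of the patent text), the cache behaviour described above can be sketched in C roughly as follows; the direct-mapped organization, line size, cache size, and function names are assumptions made for the example, not details taken from the patent.

#include <stdio.h>
#include <string.h>

#define LINE_BYTES 16            /* assumed cache-line size            */
#define NUM_LINES  64            /* assumed number of cache lines      */
#define MEM_BYTES  (64 * 1024)   /* assumed size of the backing memory */

static unsigned char memory[MEM_BYTES];  /* the large, slow memory device */

typedef struct {
    int           valid;
    unsigned      tag;                   /* memory line number held here */
    unsigned char data[LINE_BYTES];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Read one byte through the cache, filling the line from memory on a miss. */
static unsigned char cache_read(unsigned addr, int *hit)
{
    unsigned line  = addr / LINE_BYTES;      /* which memory line is wanted   */
    unsigned index = line % NUM_LINES;       /* where that line maps in cache */
    cache_line_t *l = &cache[index];

    if (l->valid && l->tag == line) {
        *hit = 1;                            /* fast path: already in cache  */
    } else {
        *hit = 0;                            /* slow path: fetch from memory */
        memcpy(l->data, &memory[line * LINE_BYTES], LINE_BYTES);
        l->tag   = line;
        l->valid = 1;
    }
    return l->data[addr % LINE_BYTES];
}

int main(void)
{
    int hit;
    unsigned i;

    for (i = 0; i < MEM_BYTES; i++)          /* fill the memory device */
        memory[i] = (unsigned char)i;

    cache_read(0x1234, &hit);                /* first access misses    */
    printf("first access:  %s\n", hit ? "hit" : "miss");
    cache_read(0x1234, &hit);                /* repeat access hits     */
    printf("second access: %s\n", hit ? "hit" : "miss");
    return 0;
}

The second access to the same address is served from the cache, which is the effect the paragraph above describes: the more of its working set the processor has already touched, the more often its requests hit in the cache.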
Attempts have been made to further increase memory speed with a cache that, when fetching data at an address in the memory device, would simultaneously fetch data at a nearby address in anticipation of the latter being requested by the processor, in which case that piece of data would already be in the cache. Another possibility is to fetch an entire block of data when a single piece is requested. This is advantageous if the data is stored in blocks, since one can then assume that, if the processor requests one address in a block, it will probably request several addresses in the same block. These approaches, however, fetch large amounts of data that are never used, which increases the memory capacity the cache must have and thus decreases its speed.
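
Again purely as an illustrative sketch, and not as the scheme the patent claims, a simple "fetch the next line too" policy of the kind described above might look like this in C; the sizes and helper names are assumed for the example.

#include <stdio.h>

#define LINE_BYTES 16
#define NUM_LINES  64

typedef struct {
    int      valid;
    unsigned tag;   /* memory line number held in this slot */
} tag_entry_t;

static tag_entry_t cache[NUM_LINES];

static int lookup(unsigned line)   /* 1 if the line is already cached */
{
    tag_entry_t *e = &cache[line % NUM_LINES];
    return e->valid && e->tag == line;
}

static void fill(unsigned line)    /* model fetching one line from memory */
{
    tag_entry_t *e = &cache[line % NUM_LINES];
    e->valid = 1;
    e->tag   = line;
}

/* Demand access that also prefetches the next sequential line on a miss. */
static void access_with_prefetch(unsigned addr)
{
    unsigned line = addr / LINE_BYTES;
    if (lookup(line)) {
        printf("addr 0x%04x: hit\n", addr);
    } else {
        fill(line);       /* demand fetch of the requested line */
        fill(line + 1);   /* speculative fetch of the next line */
        printf("addr 0x%04x: miss, prefetched line %u\n", addr, line + 1);
    }
}

int main(void)
{
    access_with_prefetch(0x0100);  /* miss: lines 16 and 17 are fetched */
    access_with_prefetch(0x0110);  /* hit: line 17 was prefetched       */
    return 0;
}

The drawback the paragraph above points out is visible here: every miss drags in an extra line whether or not it is ever used, so a cache that prefetches this bluntly needs more capacity to hold the same useful data.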


SUMMARY OF THE INVENTION

The present invention provides a method in which the cache fetches data before the microprocessor requests it, but in which the cache is nonetheless small and thus very quick, because the probability that prefetched data will actually be requested by the processor is considerably greater than in other known systems that prefetch data.
The present invention thus relates to a method for increasing data-processing speed in computer systems containing at least one microprocessor and a memory device, plus a cache connected to the processor, in which the cache is arranged to fetch data from the addresses in the memory device that the processor requests and also to fetch data from one or several addresses in the memory device that the processor has not requested, and is characterized by: a circuit called the stream-detection circuit conn
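
The summary text above is cut off, but the general idea of a stream-detection mechanism, one that watches the addresses the processor asks for, recognises a sequential or strided pattern, and then prefetches ahead of it, can be illustrated in software. The sketch below is only an approximation under assumed details (a stream is confirmed after two misses with the same stride); it is not the circuit the patent actually claims.

#include <stdio.h>

typedef struct {
    int active;     /* has a stream been confirmed?               */
    int primed;     /* have we seen at least one miss yet?        */
    int last_addr;  /* previous miss address                      */
    int stride;     /* candidate distance between misses          */
} stream_detector_t;

/* Feed one miss address to the detector; return the address to prefetch,
 * or -1 if no stream has been detected yet. */
static int stream_observe(stream_detector_t *sd, int addr)
{
    if (sd->primed && sd->stride != 0 && addr - sd->last_addr == sd->stride) {
        sd->active = 1;                    /* same stride seen again: a stream */
    } else if (sd->primed) {
        sd->stride = addr - sd->last_addr; /* remember a candidate stride      */
        sd->active = 0;
    } else {
        sd->primed = 1;                    /* first observed miss              */
    }
    sd->last_addr = addr;
    return sd->active ? addr + sd->stride : -1;
}

int main(void)
{
    stream_detector_t sd = {0};
    /* The processor walks through memory with a fixed stride of 8. */
    int misses[] = { 100, 108, 116, 124 };
    int n = (int)(sizeof misses / sizeof misses[0]);
    int i;

    for (i = 0; i < n; i++) {
        int pf = stream_observe(&sd, misses[i]);
        if (pf >= 0)
            printf("miss at %d -> prefetch address %d\n", misses[i], pf);
        else
            printf("miss at %d -> no stream detected yet\n", misses[i]);
    }
    return 0;
}

Because prefetching only starts once a pattern has been confirmed, far fewer unused lines are fetched than with blind next-line or whole-block prefetching, which is why the cache can remain small and fast, the property the summary emphasises.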

REFERENCES:
patent: 4262332 (1981-04-01), Bass et al.
patent: 4468730 (1984-08-01), Dodd et al.
patent: 5093777 (1992-03-01), Ryan
patent: 5226130 (1993-07-01), Favor et al.
patent: 5305389 (1994-04-01), Palmer
patent: 5357618 (1994-10-01), Mirza et al.
patent: 5367656 (1994-11-01), Ryan
patent: 5426764 (1995-06-01), Ryan

