Method and apparatus for prefetching data into cache

Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Reexamination Certificate

Details

U.S. Classification: C711S137000 (Class 711, Subclass 137)
Type: Reexamination Certificate
Status: active
Patent number: 06643745

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to the field of processors and, specifically, to a method and micro-architectural apparatus for prefetching data into cache.
2. Background Information
The use of a cache memory with a processor is well known in the computer art. A primary purpose of a cache memory is to bring data closer to the processor so that the processor can operate on it more quickly. It is generally understood that memory devices closer to the processor on the data path operate faster than memory devices farther away. However, there is a cost trade-off in using faster memory devices: the faster the access, the higher the cost per bit of storage. Accordingly, a cache memory tends to have a much smaller storage capacity than main memory, but provides faster access to the data it holds.
A computer system may utilize one or more levels of cache memory. The allocation and de-allocation schemes implemented for the cache in various known computer systems are generally similar in practice. That is, data required by the processor is cached in the cache memory (or memories). If a cache miss occurs, an entry is allocated at the location indexed by the access; the access can be a load of data to the processor or a store of data from the processor to memory. The cached information is retained by the cache memory until it is no longer needed, is invalidated, or is replaced by other data, at which point the cache entry is de-allocated.
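The allocate-on-miss and de-allocation behavior described above can be illustrated with a minimal sketch. The direct-mapped organization, structure fields, and the fetch_from_memory stand-in below are illustrative assumptions for this page, not details taken from the patent.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative direct-mapped cache: 64 sets of one 64-byte line each. */
    #define NUM_SETS  64
    #define LINE_SIZE 64

    typedef struct {
        bool     valid;
        uint64_t tag;
        uint8_t  data[LINE_SIZE];
    } cache_line_t;

    static cache_line_t cache[NUM_SETS];

    /* Stand-in for an access to the next level of the memory hierarchy. */
    static void fetch_from_memory(uint64_t addr, uint8_t *dst)
    {
        (void)addr;
        for (int i = 0; i < LINE_SIZE; i++)
            dst[i] = 0;
    }

    /* On a hit, return the cached line; on a miss, allocate the entry
     * indexed by the access, replacing whatever data was held there. */
    uint8_t *cache_access(uint64_t addr)
    {
        uint64_t line_addr = addr / LINE_SIZE;
        uint64_t set       = line_addr % NUM_SETS;
        uint64_t tag       = line_addr / NUM_SETS;

        cache_line_t *line = &cache[set];
        if (!line->valid || line->tag != tag) {      /* cache miss */
            fetch_from_memory(line_addr * LINE_SIZE, line->data);
            line->tag   = tag;                       /* allocate the entry */
            line->valid = true;
        }
        return line->data;
    }

    /* De-allocation: the entry is simply marked invalid. */
    void cache_invalidate(uint64_t addr)
    {
        cache[(addr / LINE_SIZE) % NUM_SETS].valid = false;
    }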
In a computer system having multiple levels of cache, the processor typically checks the next lower level (e.g., a second level) cache for data on a load "miss" to a higher level (e.g., a first level) cache. If the data is not in the lowest level cache, it is retrieved from external memory. This "daisy-chain" or "serial" lookup mechanism decreases system performance (by wasting clock cycles) when it is known, or highly likely, that the data is not present in the lower level(s) of the cache.
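A sketch of the serial lookup just described, assuming two cache levels; l1_lookup, l2_lookup, and memory_fetch are hypothetical helpers standing in for the hardware lookups, not functions named in the patent.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stubs: each lookup returns a pointer to the line on a
     * hit and NULL on a miss; memory_fetch always succeeds. */
    extern void *l1_lookup(uint64_t addr);
    extern void *l2_lookup(uint64_t addr);
    extern void *memory_fetch(uint64_t addr);

    void *serial_load(uint64_t addr)
    {
        void *line = l1_lookup(addr);   /* first (higher) level cache */
        if (line != NULL)
            return line;

        line = l2_lookup(addr);         /* second (lower) level cache */
        if (line != NULL)
            return line;

        /* Both lookups above cost clock cycles even when it was already
         * likely that the data was not cached at all. */
        return memory_fetch(addr);      /* external memory */
    }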
Accordingly, there is a need in the technology for a method and apparatus that provide the flexibility to retrieve data from external memory and bypass the second level cache upon a first level cache "miss".
It is further desirable to provide a method and apparatus to place the data in a first level cache while prefetching data exclusively into a second level cache, based on external conditions.
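The patent does not name a particular instruction set, but as one present-day illustration of software steering a prefetch toward a chosen cache level, the x86 _mm_prefetch intrinsic takes a locality hint; which cache levels each hint actually targets is implementation-defined, so the wrappers below are an analogy, not the patent's mechanism.

    #include <xmmintrin.h>   /* _mm_prefetch and the _MM_HINT_* constants (x86 SSE) */

    /* Illustrative wrappers: the hint suggests how close to the processor
     * the prefetched line should be placed. */
    static inline void prefetch_toward_first_level(const void *p)
    {
        _mm_prefetch((const char *)p, _MM_HINT_T0);  /* all cache levels */
    }

    static inline void prefetch_toward_outer_levels(const void *p)
    {
        _mm_prefetch((const char *)p, _MM_HINT_T1);  /* second level and beyond */
    }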
SUMMARY OF THE INVENTION
In one embodiment, the present invention is a computer system. The computer system includes a higher level cache, a lower level cache, a decoder to decode instructions, and a circuit coupled to the decoder. In one embodiment, the circuit, in response to a single decoded instruction, retrieves data from external memory and bypasses the lower level cache upon a higher level cache miss.
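The circuit described here responds to a single decoded instruction, which in today's terms corresponds to a software prefetch hint. As a loose illustration (not the patent's implementation), the x86 non-temporal hint _MM_HINT_NTA asks the hardware to fetch a line while minimizing pollution of the outer cache levels, which is analogous to the bypass behavior in the summary; PREFETCH_DISTANCE below is an arbitrary tuning value chosen for the example.

    #include <stddef.h>
    #include <xmmintrin.h>   /* _mm_prefetch (x86 SSE) */

    #define PREFETCH_DISTANCE 16   /* illustrative look-ahead, in elements */

    /* Sum an array that is streamed through once, prefetching ahead with
     * the non-temporal hint so the data need not displace the working set
     * held in the outer cache level(s). */
    float sum_streaming(const float *data, size_t n)
    {
        float total = 0.0f;
        for (size_t i = 0; i < n; i++) {
            if (i + PREFETCH_DISTANCE < n)
                _mm_prefetch((const char *)&data[i + PREFETCH_DISTANCE],
                             _MM_HINT_NTA);
            total += data[i];
        }
        return total;
    }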


REFERENCES:
patent: 3909790 (1975-09-01), Shapiro et al.
patent: 5355467 (1994-10-01), MacWilliams et al.
patent: 5361391 (1994-11-01), Westberg
patent: 5732242 (1998-03-01), Mowry
patent: 5751996 (1998-05-01), Glew et al.
patent: 5758119 (1998-05-01), Mayfield et al.
patent: 5809320 (1998-09-01), Jain et al.
patent: 5829025 (1998-10-01), Mittal
21164 Alpha Microprocessor Data Sheet, Samsung Electronics, 1997, p. 67.*
T. C. Mowry, “Tolerating Latency Through Software-Controlled Data Prefetching,” Ph.D. thesis, Department of Electrical Engineering, Stanford University, Mar. 1994, pp. 90-91, and 121-193. [Online] http://suif.stanford.edu/papers/.*
21164 Alpha Microprocessor Data Sheet, Samsung Electronics, 1997, pp. 1, 50-51, 55-59, 63-77.
TM1000 Preliminary Data Book (TriMedia), Philips Electronics, 1997.
Visual Instruction Set (VIS) User's Guide, Sun Microsystems, version 1.1, Mar. 1997, pp. 1-30, 41-127.
AMD-3D Technology Manual, Rev. B, Feb. 1998, pp. 1-58.
The UltraSPARC Processor—Technology White Paper, The UltraSPARC Architecture, Sun Microsystems, Jul. 17, 1997, pp. 1-10.
