Electrical computers and digital processing systems: memory – Storage accessing and control – Hierarchical memories

Type: Reexamination Certificate
Filed: 2000-05-04
Issued: 2002-07-02
Examiner: Yoo, Do Hyun (Department: 2187)
U.S. Classes: C711S130000, C711S137000, C712S207000, C717S152000
Status: active
Patent Number: 06415356
ABSTRACT:
BACKGROUND
1. Field of the Invention
The present invention relates to techniques for improving computer system performance. More specifically, the present invention relates to a method and an apparatus for prefetching data and/or instructions from memory by using an assist processor that executes in advance of a primary processor.
2. Related Art
As increasing semiconductor integration densities allow more transistors to be integrated onto a microprocessor chip, computer designers are investigating different methods of using these transistors to increase computer system performance. To this end, computer designers are beginning to incorporate multiple central processing units (CPUs) into a single semiconductor chip. This can result in performance gains for computational tasks that can be parallelized (divided) into separate pieces that can be concurrently executed.
Unfortunately, performance gains from parallelization can be limited for many applications that contain inherently serial portions of code. For these serial portions, performance is further limited by memory latency problems.
Memory latency problems are growing progressively worse as processor clock speeds continue to improve at an exponential rate. At today's processor clock speeds, it can take as many as 100 processor clock cycles to pull a cache line in from main memory.
Computer designers presently use a number of techniques to decrease memory latency delays. (1) Out-of-order execution can be used to schedule loads and stores so that memory latency is hidden as much as possible. Unfortunately, out-of-order execution is typically limited to hiding a few clock cycles of memory latency. (2) A non-faulting load instruction can be used to speculatively load a data value without causing a fault when the address is not valid. (3) A steering load instruction can be used to speculatively load a data value into L2 cache, but not L1 cache, so that L1 cache is not polluted by unused data values. Unfortunately, using non-faulting loads and steering loads can result in unnecessary loads, which waste instruction cache space and tie up registers. (4) Some researchers have investigated using hardware prefetch engines, but these are typically ineffective for irregular memory access patterns.
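For concreteness, the following minimal C sketch (not from the patent) shows how such speculative loading is exposed through compiler intrinsics today; it assumes GCC/Clang's __builtin_prefetch, whose locality-hint argument plays a role similar to the steering load described above, and an arbitrary lookahead distance of 16 elements.

#include <stddef.h>

/* Minimal sketch, assuming GCC/Clang's __builtin_prefetch intrinsic.
 * Second argument: 0 = prefetch for read. Third argument: temporal
 * locality hint; 1 suggests keeping the line in lower cache levels
 * (e.g., L2) rather than L1, echoing the steering-load idea above. */
long sum_array(const long *a, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&a[i + 16], 0, 1);  /* a hint; never faults */
        sum += a[i];
    }
    return sum;
}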
What is needed is a method and an apparatus that reduces memory latency delays in fast processor systems without the limitations and costs involved in using the above-discussed techniques.
SUMMARY
One embodiment of the present invention provides a system that prefetches from memory by using an assist processor that executes in advance of a primary processor. The system operates by executing executable code on the primary processor while simultaneously executing a reduced version of the executable code on the assist processor. The reduced version runs more quickly than the executable code and generates the same pattern of memory references, so the assist processor generates each memory reference in advance of the primary processor. The system stores the results of memory references generated by the assist processor in a store that is shared with the primary processor, so that the primary processor can access those results. In one embodiment of the present invention, this store is a cache memory.
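As a rough software analogue of this arrangement (an assumption for illustration, not the patent's implementation), the C sketch below runs a helper thread on a second core that shares a cache with the main core; the helper walks the same linked list as the main thread but performs only the pointer-chasing loads, so it stays ahead and warms the shared cache. The names node, process, assist_main, and primary_main are all illustrative.

#include <pthread.h>
#include <stddef.h>

struct node { long value; struct node *next; };

/* Stand-in for the primary processor's real (slower) computation. */
static long process(long v) { return v * v; }

/* "Assist": generates the same pointer-chasing reference pattern as the
 * primary loop, minus the computation, so it runs ahead and pulls each
 * node's cache line into the shared cache. */
static void *assist_main(void *arg) {
    volatile struct node *p = arg;
    while (p != NULL)
        p = p->next;
    return NULL;
}

/* "Primary": does the real work; by the time it reaches a node, the
 * assist thread has usually already fetched it. */
long primary_main(struct node *head) {
    pthread_t assist;
    pthread_create(&assist, NULL, assist_main, head);

    long total = 0;
    for (struct node *p = head; p != NULL; p = p->next)
        total += process(p->value);

    pthread_join(assist, NULL);
    return total;
}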
In one embodiment of the present invention, prior to executing the executable code, the system compiles source code into the executable code for the primary processor. The system also produces the reduced version of the executable code for the assist processor from the executable code for the primary processor by eliminating instructions from the executable code that have no effect on a pattern of memory references generated by the executable code.
In one embodiment of the present invention, producing the reduced version of the executable code involves converting store instructions into corresponding load instructions, eliminating redundant load instructions directed to previously loaded cache lines, and eliminating code that is used to calculate store values that are not used in determining address reference patterns.
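The following before/after C sketch suggests what this distillation might look like at source level; the patent operates on executable code, and the 64-byte cache line (8 doubles), the stride, and all names here are assumptions.

#include <stddef.h>

/* Original hot loop, as run by the primary processor. */
void scale(double *dst, const double *src, double k, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] = k * src[i];    /* load src[i], multiply, store dst[i] */
}

/* Reduced version for the assist processor:
 *  - the store to dst[i] is converted into a load (the line is touched
 *    but nothing is written),
 *  - the multiply by k computes a store value that never affects any
 *    address, so it is eliminated,
 *  - stepping by 8 doubles (one 64-byte cache line) eliminates loads
 *    that would hit a line already fetched by the previous iteration. */
void scale_reduced(const double *dst, const double *src, size_t n) {
    volatile double sink = 0;
    for (size_t i = 0; i < n; i += 8) {
        sink = src[i];          /* touches the same lines as the original loads */
        sink = dst[i];          /* store converted into a load */
    }
}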
In one embodiment of the present invention, the system profiles the executable code to create instruction traces for hot spots in the executable code, and then filters the instruction traces to produce the reduced version of the executable code.
In one embodiment of the present invention, the processes of compiling the source code and producing the reduced version of the executable code are carried out by a compiler.
In one embodiment of the present invention, the system periodically sends progress indicators from the primary processor to the assist processor through a one-way communication channel. In a variation on this embodiment, the system stops execution of the assist processor if the assist processor is less than a minimum number of instructions ahead of the primary processor.
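A minimal sketch of this throttling policy, assuming C11 atomics as the one-way channel and an arbitrary MIN_LEAD threshold measured in loop iterations (the patent's channel and distance metric are hardware-level details not specified here):

#include <stdatomic.h>
#include <stdint.h>

#define MIN_LEAD 64   /* assumed minimum lead, in loop iterations */

/* One-way channel: written only by the primary, read only by the assist. */
static _Atomic uint64_t primary_pos;

/* Primary side: periodically publish a progress indicator. */
static inline void primary_report(uint64_t pos) {
    atomic_store_explicit(&primary_pos, pos, memory_order_relaxed);
}

/* Assist side: keep running only while the lead over the primary stays
 * at or above the minimum; otherwise stop, since prefetches issued from
 * this close to (or behind) the primary no longer help it. */
static inline int assist_should_continue(uint64_t assist_pos) {
    uint64_t p = atomic_load_explicit(&primary_pos, memory_order_relaxed);
    return assist_pos >= p + MIN_LEAD;
}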
In one embodiment of the present invention, if the reduced version of the executable code is determined not to be significantly faster than the executable code, the reduced version is modified to speculatively execute code down the branch path that is more frequently taken.
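As a small illustration (the taken rate and all names are assumptions), the reduced code can drop an unbalanced branch, together with the computation of its condition, and always follow the frequent path:

#include <stddef.h>

/* Original code: profiling says the 'then' path is taken ~95% of the time. */
int original(const int *a, const int *b, size_t i, int cond) {
    return cond ? a[i] : b[i];
}

/* Reduced version: the branch and its condition are gone; the assist
 * speculatively touches only the frequently taken path's cache line. */
void reduced(const int *a, size_t i) {
    volatile int sink = a[i];
    (void)sink;
}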
In one embodiment of the present invention, the store includes a data cache that is shared by the primary processor and the assist processor.
In one embodiment of the present invention, the store includes an instruction cache that is shared by the primary processor and the assist processor.
In one embodiment of the present invention, the store includes a branch history table that is shared by the primary processor and the assist processor.
In one embodiment of the present invention, the primary processor and the assist processor reside on the same semiconductor chip.
In one embodiment of the present invention, the primary processor and the assist processor reside on distinct semiconductor chips.
REFERENCES:
patent: 4991080 (1991-02-01), Emma et al.
patent: 5392391 (1995-02-01), Caulk, Jr. et al.
patent: 5787285 (1998-07-01), Lanning
patent: WO 96/20440 (1996-07-01), None
Publication entitled, “Storage Hierarchy Control System” by Sakalay, IBM Technical Disclosure Bulletin, vol. 15, no. 4, Sep. 1972, pp. 1100-1101; XP 002002415; G06F 12/08.
Publication entitled, “Code Generation for Streaming: an Access/Execute Mechanism” by Manuel E. Benitez and Jack W. Davidson, Department of Computer Science, University of Virginia, Charlottesville, VA 22903; Computer Architecture News, vol. 19, no. 2, Apr. 1991, New York, USA, pp. 132-141.
Chaudhry, Shailender
Tremblay, Marc
Encarnación, Yamir
Yoo, Do Hyun