Method and system for fetching noncontiguous instructions in...

Electrical computers and digital processing systems: processing – Processing control – Branching

Reexamination Certificate


Details

Classification: C712S205000, C712S237000
Type: Reexamination Certificate
Status: active
Patent number: 06256727

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to a superscalar processor and more particularly to a system and method for fetching noncontiguous instructions in such a processor.
BACKGROUND OF THE INVENTION
Superscalar processors employ aggressive techniques to exploit instruction-level parallelism. Wide dispatch and issue paths place an upper bound on peak instruction throughput. Large issue buffers are used to maintain a window of instructions necessary for detecting parallelism, and a large pool of physical registers provides destinations for all of the in-flight instructions issued from the window. To enable concurrent execution of instructions, the execution engine is composed of many parallel functional units. The fetch engine speculates past multiple branches in order to supply a continuous instruction stream to the window.
The trend in superscalar design is to scale these techniques: wider dispatch/issue, larger windows, more physical registers, more functional units, and deeper speculation. To maintain this trend, it is important to balance all parts of the processor; any bottlenecks diminish the benefit of aggressive techniques.
Instruction fetch performance depends on a number of factors. Instruction cache hit rate and branch prediction accuracy have long been recognized as critical to fetch performance and are well-researched areas.
Because of branches and jumps, instructions to be fetched during any given cycle may not be in contiguous cache locations. Hence, there must be adequate paths and logic available to fetch and align noncontiguous basic blocks and pass them down the pipelines. That is, it is not enough for the instructions to be present in the cache; it must also be possible to access them in parallel.
Modern microprocessors routinely use Branch History Tables and Branch Target Address Caches to improve their ability to efficiently fetch past branch instructions. Branch History Tables and other prediction mechanisms allow a processor to fetch beyond a branch instruction before the outcome of the branch is known. Branch Target Address Caches allow a processor to speculatively fetch beyond a branch before the branch's target address has been computed. Both of these techniques use run-time history to speculatively predict which instructions should be fetched and eliminate “dead” cycles that might normally be wasted. Even with these techniques, current microprocessors are limited to fetching only contiguous instructions during a single clock cycle.
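As an illustration of how these two structures cooperate (a behavioral sketch only, with made-up table sizes and indexing, not the design of any particular processor), a two-bit Branch History Table and a Branch Target Address Cache might be modeled as follows:

class BranchPredictor:
    def __init__(self, bht_entries=1024):
        self.bht = [1] * bht_entries   # 2-bit saturating counters, start weakly not-taken
        self.btac = {}                 # branch address -> predicted target address

    def predict(self, pc):
        """Return (predicted taken?, predicted target or None) before the branch resolves."""
        counter = self.bht[pc % len(self.bht)]
        taken = counter >= 2
        return taken, (self.btac.get(pc) if taken else None)

    def update(self, pc, taken, target):
        """Train both tables once the actual outcome and target are known."""
        idx = pc % len(self.bht)
        if taken:
            self.bht[idx] = min(3, self.bht[idx] + 1)
            self.btac[pc] = target
        else:
            self.bht[idx] = max(0, self.bht[idx] - 1)

bp = BranchPredictor()
bp.update(0x40, taken=True, target=0x80)
bp.update(0x40, taken=True, target=0x80)
print(bp.predict(0x40))   # (True, 128): fetch can redirect to the target speculatively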
As superscalar processors become more aggressive and attempt to execute many more instructions per cycle, they must also be able to fetch many more instructions per cycle. Frequent branch instructions can severely limit a processor's effective fetch bandwidth. Statistically, one of every four instructions is a branch instruction and over half of these branches are taken. A processor with a wide fetch bandwidth, say 8 contiguous instructions per cycle, could end up throwing away half of the instructions that it fetches as much as half of the time.
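A quick back-of-the-envelope check of this claim, using the figures quoted above (one branch per four instructions, roughly half of them taken, and an 8-wide fetch group):

fetch_width = 8          # contiguous instructions fetched per cycle (example above)
branch_rate = 1 / 4      # roughly one in four instructions is a branch
taken_rate = 0.5         # about half of those branches are taken

taken_branches_per_group = fetch_width * branch_rate * taken_rate
print(taken_branches_per_group)   # 1.0: on average one taken branch per fetch group,
                                  # so the instructions fetched after it are discarded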
High performance superscalar processor organizations divide naturally into an instruction fetch mechanism and an instruction execution mechanism. The fetch and execution mechanisms are separated by instruction issue buffer(s), for example, queues, reservation stations, etc. Conceptually, the instruction fetch mechanism acts as a “producer” which fetches, decodes, and places instructions into the buffer. The instruction execution engine is the “consumer” which removes instructions from the buffer and executes them, subject to data dependence and resource constraints. Control dependences (branches and jumps) provide a feedback mechanism between the producer and consumer.
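The producer/consumer split can be pictured with a short behavioral sketch; the bounded queue below stands in for the issue buffers, and dependence checking, the parallel functional units, and the branch feedback path are all omitted:

from collections import deque

issue_buffer = deque(maxlen=16)        # bounded, like a real issue queue

def fetch_stage(program, pc):
    """Producer: fetch/decode and enqueue until the buffer fills or the program ends."""
    while pc < len(program) and len(issue_buffer) < issue_buffer.maxlen:
        issue_buffer.append(program[pc])
        pc += 1
    return pc                          # where fetching would resume next cycle

def execute_stage():
    """Consumer: drain and 'execute' buffered instructions."""
    while issue_buffer:
        print("executing", issue_buffer.popleft())

program = ["add r1,r2,r3", "beq r1,done", "ld r4,0(r5)"]
next_pc = fetch_stage(program, 0)
execute_stage()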
Previous designs work with a conventional instruction cache, which contains a static form of the program. Every cycle, instructions from noncontiguous locations must be fetched from the instruction cache and assembled into the predicted dynamic sequence. There are problems with this approach:
Pointers to all of the noncontiguous instruction blocks must be generated before fetching can begin. This implies a level of indirection, through some form of branch target table (branch target buffer, branch address cache, etc.), which translates into an additional pipeline stage before the instruction cache.
The instruction cache must support simultaneous access to multiple, noncontiguous cache lines. This forces the cache to be multiported: if multiporting is achieved through interleaving, bank conflicts occur.
After fetching the noncontiguous instructions from the cache, they must be assembled into the dynamic sequence. Instructions must be shifted and aligned to make them appear contiguous to the decoder. This most likely translates into an additional pipeline stage after the instruction cache.
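The three problems above correspond to three steps in the conventional fetch path. The following rough sketch walks through those steps; the branch target table and the cache contents are made-up illustrations, not the structures of any specific design:

# Toy instruction cache: line address -> instructions in that line
icache = {
    0x100: ["i0", "i1", "br 0x200", "i3"],
    0x200: ["t0", "t1", "t2", "t3"],
}
# Branch target table: start line -> (position of predicted-taken branch, target line)
branch_target_table = {0x100: (2, 0x200)}

def conventional_fetch(start_line):
    # Step 1 (pointer generation): indirection through the branch target table,
    # i.e. an extra pipeline stage before the instruction cache.
    pointers = [start_line]
    branch_info = branch_target_table.get(start_line)
    if branch_info is not None:
        pointers.append(branch_info[1])
    # Step 2 (multiported access): read all the noncontiguous lines "in parallel".
    lines = [icache[p] for p in pointers]
    # Step 3 (assembly): shift/align into the predicted dynamic sequence, an extra
    # stage after the instruction cache.
    if branch_info is None:
        return lines[0]
    branch_pos = branch_info[0]
    return lines[0][:branch_pos + 1] + lines[1]

print(conventional_fetch(0x100))
# ['i0', 'i1', 'br 0x200', 't0', 't1', 't2', 't3']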
A trace cache approach avoids these problems by caching the dynamic sequences themselves, ready for the decoder. If the predicted dynamic sequence exists in the trace cache, it does not have to be recreated on the fly from the instruction cache's static representation. In particular, no additional stages before or after the instruction cache are needed for fetching noncontiguous instructions. The stages still exist, but not on the critical path of the fetch unit; rather, they sit on the fill side of the trace cache. The cost of this approach is redundant instruction storage: the same instructions must reside in both the primary cache and the trace cache, and there might even be redundancy among lines in the trace cache. Accordingly, in a trace cache approach, several instructions are grouped together based upon a most likely path and stored together in the trace cache. This system requires a complex mechanism to pack and cache instruction segments.
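A behavioral sketch of that lookup is shown below; keying traces on the start address plus predicted branch outcomes follows the usual trace cache formulation, and the fill path simply reuses a stand-in for the conventional fetch sketched earlier:

trace_cache = {}   # (start address, predicted branch outcomes) -> ready-to-decode trace

def build_from_icache(start, outcomes):
    # Stand-in for the conventional noncontiguous fetch/assemble path.
    return [f"insn@{hex(start)}+{i}" for i in range(8)]

def trace_fetch(start, outcomes):
    key = (start, outcomes)
    if key in trace_cache:                      # hit: dynamic sequence already assembled
        return trace_cache[key]
    trace = build_from_icache(start, outcomes)  # miss: slow path, off the critical path
    trace_cache[key] = trace                    # fill side; note the redundant storage
    return trace

print(trace_fetch(0x100, ("taken",)))   # miss: built and filled
print(trace_fetch(0x100, ("taken",)))   # hit: served directly from the trace cache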
Accordingly, what is needed is a method and system for improving the overall throughput of a superscalar processor. More particularly, what is needed is a system and method for efficiently fetching noncontiguous instructions in such a processor. The present invention addresses such a need.
SUMMARY OF THE INVENTION
A method and system for obtaining noncontiguous blocks of instructions in a data processing system is disclosed. In a first aspect, a system for fetching noncontiguous blocks of instructions in a data processing system is disclosed. The system comprises an instruction cache means for providing a first plurality of instructions and branch logic means for receiving the first plurality of instructions and for providing branch history information about the first plurality of instructions. The system further includes an auxiliary cache means for receiving a second plurality of instructions based upon the branch history information. The auxiliary cache means overlays at least one of the second plurality of instructions if there is a branch in the first plurality of instructions and the branch is to the second plurality of instructions.
In a second aspect, a method for obtaining noncontiguous blocks of instructions comprises storing a first plurality of instructions in a first cache and fetching the first plurality of instructions in parallel with a fetch of a second plurality of instructions within a second cache. In the present invention, the number of the second plurality of instructions is greater than the number of the first plurality of instructions. This second aspect includes replacing a portion of the second plurality of instructions with at least one of the first plurality of instructions based upon branch history information of the data processing system.
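A minimal behavioral sketch of this parallel-fetch-and-overlay idea appears below. The cache contents, the single taken/not-taken hint, and the mapping of the first and second pluralities onto an instruction cache and an auxiliary cache are simplifying assumptions for illustration only, not the patent's implementation:

# Contiguous group held in the instruction cache (the larger plurality above)
instruction_cache = {
    0x100: ["a0", "a1", "br 0x300", "a3", "a4", "a5", "a6", "a7"],
}
# Target-block instructions held in the auxiliary cache (the smaller plurality)
auxiliary_cache = {
    0x300: ["b0", "b1", "b2", "b3"],
}

def fetch_group(pc, predict_taken):
    """Fetch both caches in parallel; overlay past the branch if history says taken."""
    group = list(instruction_cache[pc])        # contiguous instructions
    if predict_taken:
        pos = next(i for i, insn in enumerate(group) if insn.startswith("br "))
        target = int(group[pos].split()[1], 16)
        overlay = auxiliary_cache[target]      # fetched in the same cycle
        # Replace the instructions after the taken branch with the target block.
        group[pos + 1:] = overlay[:len(group) - pos - 1]
    return group

print(fetch_group(0x100, predict_taken=True))
# ['a0', 'a1', 'br 0x300', 'b0', 'b1', 'b2', 'b3']

In this sketch the decoder receives a single group containing instructions from two noncontiguous locations, assembled in one step driven only by the branch history hint.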
The above-described present invention allows a processor to use branch history information and an auxiliary cache to fetch multiple noncontiguous groups of instructions in a single cycle. Furthermore, the technique allows noncontiguous fetching to be performed without requiring multiple levels of nested branch prediction logic to be evaluated in a single cycle.


