Title: Training line predictor for branch targets
Patent number: 06647490
Type: Reexamination Certificate (active)
Filed: 1999-10-14
Issued: 2003-11-11
Examiner: Coleman, Eric (Department: 2183)
Classification: Electrical computers and digital processing systems: processing – Processing control – Branching (C712S237000)
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to the field of processors and, more particularly, to instruction fetching mechanisms within processors.
2. Description of the Related Art
Superscalar processors achieve high performance by executing multiple instructions per clock cycle and by choosing the shortest possible clock cycle consistent with the design. As used herein, the term “clock cycle” refers to an interval of time accorded to various stages of an instruction processing pipeline within the processor. Storage devices (e.g. registers and arrays) capture their values according to the clock cycle. For example, a storage device may capture a value according to a rising or falling edge of a clock signal defining the clock cycle. The storage device then stores the value until the subsequent rising or falling edge of the clock signal, respectively. The term “instruction processing pipeline” is used herein to refer to the logic circuits employed to process instructions in a pipelined fashion. Although the pipeline may be divided into any number of stages at which portions of instruction processing are performed, instruction processing generally comprises fetching the instruction, decoding the instruction, executing the instruction, and storing the execution results in the destination identified by the instruction.
A popular instruction set architecture is the x86 instruction set architecture. Due to the widespread acceptance of the x86 instruction set architecture in the computer industry, superscalar processors designed in accordance with this architecture are becoming increasingly common. The x86 instruction set architecture specifies a variable byte-length instruction set in which different instructions may occupy differing numbers of bytes. For example, the 80386 and 80486 processors allow a particular instruction to occupy a number of bytes between 1 and 15. The number of bytes occupied depends upon the particular instruction as well as various addressing mode options for the instruction.
Because instructions are variable-length, locating instruction boundaries is complicated. The length of a first instruction must be determined prior to locating a second instruction subsequent to the first instruction within an instruction stream. However, the ability to locate multiple instructions within an instruction stream during a particular clock cycle is crucial to superscalar processor operation. As operating frequencies increase (i.e. as clock cycles shorten), it becomes increasingly difficult to locate multiple instructions simultaneously.
Various predecode schemes have been proposed in which a predecoder appends information regarding each instruction byte to the instruction byte as the instruction is stored into the cache. As used herein, the term “predecoding” is used to refer to generating instruction decode information prior to storing the corresponding instruction bytes into an instruction cache of a processor. The generated information may be stored with the instruction bytes in the instruction cache. For example, an instruction byte may be indicated to be the beginning or end of an instruction. By scanning the predecode information when the corresponding instruction bytes are fetched, instructions may be located without actually attempting to decode the instruction bytes. The predecode information may be used to decrease the amount of logic needed to locate multiple variable-length instructions simultaneously. Unfortunately, these schemes become insufficient at high clock frequencies as well. A method for locating multiple instructions during a clock cycle at high frequencies is needed.
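The boundary-scanning idea above can be illustrated with a minimal sketch. This assumes a hypothetical predecode scheme with one "start" bit per instruction byte; the function name and the fixed per-line instruction limit are illustrative, not taken from any particular design:

```python
def locate_instructions(start_bits, max_insns):
    """Given one predecode 'start' bit per fetched instruction byte,
    return the byte offsets of up to max_insns instruction boundaries.

    Scanning these precomputed bits locates variable-length
    instructions without attempting to decode the bytes themselves.
    """
    return [offset for offset, bit in enumerate(start_bits) if bit][:max_insns]
```

For example, for an 8-byte fetch in which bytes 0, 3, 5, and 6 begin instructions, `locate_instructions([1, 0, 0, 1, 0, 1, 1, 0], 3)` yields the first three boundaries, `[0, 3, 5]`.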
SUMMARY OF THE INVENTION
The problems outlined above are in large part solved by a line predictor as described herein. The line predictor caches alignment information for instructions. In response to each fetch address, the line predictor provides information for the instruction beginning at the fetch address, as well as for one or more additional instructions subsequent to that instruction. The alignment information may be, for example, instruction pointers, each of which directly locates a corresponding instruction within a plurality of instruction bytes fetched in response to the fetch address. The line predictor may include a memory having multiple entries, each entry storing up to a predefined maximum number of instruction pointers and a fetch address corresponding to the instruction identified by a first one of the instruction pointers. Additionally, each entry may include a link to another entry storing instruction pointers to the next instructions within the predicted instruction stream, and a next fetch address corresponding to the first instruction within the next entry. The next fetch address may be provided to the instruction cache to fetch the corresponding instruction bytes.
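The entry layout and lookup behavior described above can be sketched as follows. All field names, the entry count, and the direct-mapped indexing are assumptions made for illustration, not details of the patented design:

```python
from dataclasses import dataclass

@dataclass
class LineEntry:
    # Hypothetical layout of one line predictor entry.
    fetch_address: int        # address of the first instruction in the line
    insn_pointers: list       # offsets locating each instruction in the fetched bytes
    next_fetch_address: int   # predicted address of the next line's first instruction
    next_index: int           # link to the line predictor entry for the next line

class LinePredictor:
    def __init__(self, num_entries=1024):
        self.entries = [None] * num_entries

    def index_of(self, fetch_address):
        # Simple direct-mapped indexing on the fetch address (an assumption).
        return fetch_address % len(self.entries)

    def lookup(self, fetch_address):
        # On a hit, return the cached alignment info plus the next fetch
        # address and next-entry link; on a miss, the entry must be created
        # and trained.
        entry = self.entries[self.index_of(fetch_address)]
        if entry is not None and entry.fetch_address == fetch_address:
            return entry.insn_pointers, entry.next_fetch_address, entry.next_index
        return None
```

The next-fetch link is what lets the predictor supply a new fetch address (and the next entry's alignment information) every cycle without re-scanning instruction bytes.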
If the terminating instruction within the entry is a conditional branch instruction, the next fetch address may be the branch target address or the sequential address, depending upon the condition outcome. On the other hand, if the terminating instruction within the entry is an indirect branch instruction, the next fetch address (i.e. the branch target address) may be variable based on the operands of the indirect branch instruction. If the terminating instruction is a return instruction, the next fetch address is the address sequential to the most recent call instruction. Accordingly, the line predictor is trained with respect to the next fetch address (and next index within the line predictor, which provides the link to the next entry). As line predictor entries are created, a set of branch predictors may be accessed to provide an initial next fetch address and index. The initial training is verified by accessing the branch predictors at each fetch of the line predictor entry, and updated as dictated by the state of the branch predictors at each fetch.
For example, conditional branches may be predicted taken or not-taken, and the next fetch address may be set to the branch target address or sequential address accordingly. Additionally, a next alternate fetch address (and index within the line predictor) may be stored for the entry (corresponding to the non-predicted target or sequential path). If the prediction stored in the line predictor entry disagrees with the branch predictor during a particular fetch, the alternate fetch address and index may be used. Furthermore, the line predictor may swap the next fetch and next alternate fetch fields to reflect the more recent prediction.
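The disagreement-and-swap behavior for conditional branches can be sketched as below. The dictionary keys are illustrative names for the entry fields described above:

```python
def verify_conditional(entry, branch_predictor_taken):
    """Reconcile a line predictor entry with the branch predictor's
    direction for this fetch (a sketch; field names are assumptions).

    If the stored prediction disagrees, redirect down the alternate
    path and swap the next/alternate fields so the entry reflects the
    more recent prediction.
    """
    if branch_predictor_taken != entry['predicted_taken']:
        entry['next_fetch'], entry['alt_fetch'] = (
            entry['alt_fetch'], entry['next_fetch'])
        entry['next_index'], entry['alt_index'] = (
            entry['alt_index'], entry['next_index'])
        entry['predicted_taken'] = branch_predictor_taken
    return entry['next_fetch'], entry['next_index']
```

Keeping both the predicted and non-predicted paths in the entry means a direction change costs only a field swap, not a retraining of the entry from scratch.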
The branch predictors may include an indirect branch target cache and a return stack for predicting indirect branch target addresses and return addresses, respectively. The next fetch address in an entry terminated by an indirect branch instruction may be verified against a predicted address from the indirect branch target cache. Similarly, the next fetch address in an entry terminated by a return instruction may be verified against the top of the return stack. If a mismatch occurs, the predicted address from the corresponding branch predictor is used and the line predictor entry is updated with the newly predicted address. The line predictor may provide a rapid means for providing next fetch addresses (and next indexes), while the branch predictors may, in parallel, provide accurate branch predictions for the branches. The line predictor may be updated to reflect the branch predictor state, thereby tracking the branch predictors.
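The verification against the indirect branch target cache and the return stack can be sketched as follows. The cache is modeled as a plain dictionary keyed by branch address and the return stack as a list; these representations, and all names, are simplifying assumptions:

```python
def verify_next_fetch(entry, terminator, indirect_target_cache, return_stack):
    """Check an entry's next fetch address against the matching branch
    predictor structure (a sketch with assumed representations).

    On a mismatch, the predictor's address wins and the line predictor
    entry is updated (trained) with the newly predicted address.
    """
    if terminator == 'indirect':
        # Indirect branch: consult the indirect branch target cache.
        predicted = indirect_target_cache.get(
            entry['branch_pc'], entry['next_fetch'])
    elif terminator == 'return':
        # Return instruction: consult the top of the return stack.
        predicted = return_stack[-1]
    else:
        return entry['next_fetch']
    if predicted != entry['next_fetch']:
        entry['next_fetch'] = predicted  # retrain the entry
    return entry['next_fetch']
```

This mirrors the division of labor described above: the line predictor answers quickly from its cached next fetch address, while the slower but more accurate branch predictors correct it in parallel.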
Broadly speaking, a processor is contemplated, comprising a fetch address generation unit configured to generate a fetch address and a line predictor coupled to the fetch address generation unit. The line predictor includes a first memory comprising a plurality of entries, each entry storing a plurality of instruction pointers and a next entry indication. The line predictor is configured to select a first entry (of the plurality of entries) corresponding to the fetch address. If one of a first plurality of instruction pointers within the fi
Inventors: Keller, James B.; Matus, Francis M.; Schakel, Keith R.; Sharma, Puneet
Assignee: Advanced Micro Devices, Inc.
Examiner: Coleman, Eric
Attorney/Agent: Merkel, Lawrence J.; Meyertons Hood Kivlin Kowert & Goetzel P.C.