Pipeline throughput via parallel out-of-order execution of...

Electrical computers and digital processing systems: processing – Dynamic instruction dependency checking – monitoring or...

Reexamination Certificate


Details

C712S217000, C712S219000

Reexamination Certificate

active

06195745

ABSTRACT:

BACKGROUND
System Overview
U.S. Pat. No. 5,226,126, ('126) PROCESSOR HAVING PLURALITY OF FUNCTIONAL UNITS FOR ORDERLY RETIRING OUTSTANDING OPERATIONS BASED UPON ITS ASSOCIATED TAGS, to McFarland et al., issued Jul. 6, 1993, which is assigned to the assignee of the present invention, described a high-performance X86 processor that defines the system context in which the instant invention finds particular application, and is hereby incorporated by reference. The processor has multiple function units capable of performing parallel speculative execution. The function units include a Numerics Processor unit (NP), an Integer Execution Unit (IEU), and an Address Preparation unit (AP).
Instructions are fetched and decoded by a DECoder unit (DEC), which generates tagged pseudo-operations (p-ops) that are broadcast to the functional units. Each instruction will result in one or more p-ops being issued. For the purpose of this invention the terms p-op and operation are used interchangeably. Each operation executed by the processor may correspond to one instruction or to one p-op of a multi-p-op instruction.
DEC “relabels” (or reassigns) the “virtual” register specifiers used by the instructions into physical register specifiers that are part of each p-op. This allows DEC to transparently manage physical register files within the execution units. Register relabeling (reassignment) is integral to the processor's ability to perform speculative execution. The p-ops can be viewed as very wide, largely unencoded, horizontal control words, a format intended to greatly reduce or eliminate any further decoding by the execution units. DEC performs branch prediction and speculatively issues p-ops past up to two unresolved branches; that is, DEC fetches and pre-decodes instructions along up to three instruction streams.
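Register relabeling of this kind is a form of register renaming. The following is a minimal sketch of the general idea only, assuming a simple architectural-to-physical map and a free list; the sizes, names, and allocation policy are illustrative and are not taken from the '126 patent or the instant invention.

/* Minimal register-relabeling sketch: an architectural-to-physical map
 * plus a free list.  Sizes and names are illustrative only. */
#include <stdio.h>

#define NUM_ARCH 8    /* e.g. EAX..EDI */
#define NUM_PHYS 24   /* more physical registers than architectural ones */

static int map[NUM_ARCH];        /* architectural specifier -> physical register */
static int free_list[NUM_PHYS];
static int free_top;

static void reset_map(void) {
    for (int a = 0; a < NUM_ARCH; a++)
        map[a] = a;                            /* identity mapping at reset */
    free_top = 0;
    for (int p = NUM_ARCH; p < NUM_PHYS; p++)
        free_list[free_top++] = p;             /* remaining registers are free */
}

/* Relabel one p-op of the form "dst <-- dst op src": source specifiers
 * read the current map; the destination is given a fresh physical
 * register so older, still-outstanding p-ops keep seeing the old value. */
static void relabel(int dst, int src, int *pdst, int *pdst_old, int *psrc) {
    *pdst_old = map[dst];
    *psrc     = map[src];
    *pdst     = free_list[--free_top];
    map[dst]  = *pdst;
}

int main(void) {
    reset_map();
    int pdst, pdst_old, psrc;
    relabel(3, 5, &pdst, &pdst_old, &psrc);    /* e.g. R3 <-- R3 op R5 */
    printf("R3: phys %d -> phys %d, R5 read as phys %d\n", pdst_old, pdst, psrc);
    return 0;
}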
The AP unit contains a relabeled virtual copy of the general purpose registers and segment registers and has the hardware resources for performing segmentation and paging of virtual memory addresses. AP calculates addresses for all memory operands, control transfers (including protected-mode gates), and page crosses.
IEU also contains a relabeled virtual copy of the general purpose registers and segment registers (kept coherent with AP's copy) and has the hardware resources for performing integer arithmetic and logical operations. NP contains the floating-point register file and has the floating-point arithmetic hardware resources.
Each execution unit has its own queue into which incoming p-ops are placed pending execution. The execution units are free to execute their p-ops largely independently of the other execution units. Consequently, p-ops may be executed out of order. When a unit completes executing a p-op, it sends a termination back to DEC. DEC evaluates the terminations, choosing to retire or abort the outstanding p-ops as appropriate, and subsequently commands the function units accordingly. Multiple p-ops may be retired or aborted simultaneously. A p-op may be aborted because it was downstream of a predicted branch that was ultimately resolved as mispredicted, or because it followed a p-op that terminated abnormally, requiring intervening interrupt processing.
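As a rough illustration of this protocol (the actual encoding of tags and terminations is not given in this excerpt), a tracker might record, per outstanding tag, which units have terminated it and whether any termination was abnormal, retiring the oldest p-ops in order and aborting everything younger than an abnormally terminated one. The field names and unit bitmasks below are assumptions for the sketch.

/* Illustrative tracker for outstanding p-ops, assuming each p-op must be
 * terminated by a known set of units before it can retire in order.
 * Field names, sizes, and unit bitmasks are assumptions for this sketch. */
#include <stdbool.h>
#include <stdio.h>

#define MAX_OUTSTANDING 16

struct outstanding {
    unsigned needed;      /* bitmask of units that must terminate this p-op */
    unsigned seen;        /* bitmask of units that have terminated it so far */
    bool     abnormal;    /* some unit reported an abnormal termination */
};

static struct outstanding pops[MAX_OUTSTANDING];
static int oldest, youngest;                 /* tags, issued in order */

/* A function unit reports a termination for a tagged p-op. */
static void terminate(int tag, unsigned unit_bit, bool abnormal) {
    pops[tag].seen     |= unit_bit;
    pops[tag].abnormal |= abnormal;
}

/* DEC retires oldest-first; an abnormal p-op aborts it and everything younger. */
static void retire_or_abort(void) {
    while (oldest != youngest && pops[oldest].seen == pops[oldest].needed) {
        if (pops[oldest].abnormal) {
            printf("abort tags %d..%d\n", oldest, youngest - 1);
            youngest = oldest;               /* all younger p-ops are flushed */
            return;
        }
        printf("retire tag %d\n", oldest);
        oldest++;
    }
}

int main(void) {
    pops[0] = (struct outstanding){ .needed = 0x3 };   /* needs two units */
    pops[1] = (struct outstanding){ .needed = 0x1 };   /* needs one unit */
    oldest = 0; youngest = 2;
    terminate(1, 0x1, false);      /* a younger p-op terminates first... */
    retire_or_abort();             /* ...but nothing retires out of order */
    terminate(0, 0x1, false);
    terminate(0, 0x2, false);
    retire_or_abort();             /* now tags 0 and 1 retire in order */
    return 0;
}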
Aborts cause the processor state to revert to that associated with some previously executed operation. Aborts are largely transparent to the execution units, as most processor state reversion is managed through the dynamic register relabeling specified by DEC in subsequently issued p-ops.
Data Interlocks in the Existing System
Instructions that make memory or I/O references require an effective address computation. The address computation typically includes references to register values that were computed by previous instructions. An effective address may combine a displacement field from the instruction with base and index registers from the register file.
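For concreteness, an X86-style effective address combines a base register, an optionally scaled index register, and an instruction displacement. The sketch below shows only that arithmetic; it is not a description of the AP's internals.

/* Effective-address arithmetic of the kind described above: base and
 * index values come from the register file, the displacement from the
 * instruction.  The scale factor is shown for the general x86 case. */
#include <stdint.h>
#include <stdio.h>

static uint32_t effective_address(uint32_t base, uint32_t index,
                                  uint32_t scale, int32_t disp) {
    return base + index * scale + (uint32_t)disp;
}

int main(void) {
    /* e.g. memory [R5 + displacement value], as in the example below */
    printf("EA = 0x%08x\n",
           (unsigned)effective_address(0x1000u, 0u, 1u, 0x20));
    return 0;
}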
For the purpose of this discussion, instructions can be roughly divided into two classes: those that operate on a program's data and those that are used to compute address components such as base register and index register values. While the results of these two classes interact, there is a fair degree of independence between the classes. For example, the results of a divide instruction are not typically used as a basis for computing an address to access memory. Such independence cannot be guaranteed, but dynamic occurrences of instructions that affect only future address computations are frequent enough to be interesting.
Instruction sequences typically have mixes of the two instruction classes. The inventors of the present invention discovered that situations can and frequently do occur in X86 applications where a non-address class instruction precedes an address class instruction which does not depend upon the result of the non-address class instruction, and the address class instruction precedes an instruction of either class that requires an address computation. Consider the following example:
DIVIDE   R3 <-- R3 op immediate value                 (non-address class)
ADD      R5 <-- R5 op R6                              (address class)
SUB      R3 <-- R3 op memory [R5 + displacement value] (requires address computation).
When a dedicated function unit is used to process addresses, it must wait for the execution unit to finish the non-address class instruction (the DIVIDE, in the example shown) and then finish the address class instruction (the ADD) before it can proceed (with the SUB). This dependency causes an interlock of the address unit until the register value needed for the effective address becomes available.
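One way to picture the interlock is as a scoreboard check: the address unit cannot form the effective address while any register it needs is still owed a write by an older, unexecuted p-op, even when the producing instruction (the ADD above) is itself only waiting behind unrelated work (the DIVIDE). The sketch below is a generic scoreboard-style illustration, not the mechanism of the existing system or of the invention.

/* Generic scoreboard-style view of the interlock described above.
 * A register is "pending" while an older p-op still owes it a write;
 * the address unit stalls until its base/index registers are clear. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_REGS 8
static bool pending[NUM_REGS];

static bool address_unit_can_proceed(int base_reg) {
    return !pending[base_reg];
}

int main(void) {
    pending[3] = true;   /* DIVIDE: R3 <-- R3 op imm   (slow, at head of IEU queue) */
    pending[5] = true;   /* ADD:    R5 <-- R5 op R6    (queued behind the DIVIDE)   */

    /* SUB needs memory [R5 + displacement]; the address unit stalls on R5
     * even though the SUB's address does not depend on the DIVIDE itself. */
    printf("AP can compute SUB's address: %s\n",
           address_unit_can_proceed(5) ? "yes" : "no (stalled)");

    pending[5] = false;  /* once the ADD completes, the stall clears */
    printf("AP can compute SUB's address: %s\n",
           address_unit_can_proceed(5) ? "yes" : "no (stalled)");
    return 0;
}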
Problems of the System Discovered by the Inventors
New designs are needed to continually improve the performance/cost ratio and stay ahead of competitive microarchitectures. As was demonstrated by example supra, the expensive hardware resources of the AP are frequently not being fully exploited due to data dependencies. It is desirable to remove such dependencies and otherwise improve performance without adversely affecting either new product schedules or cost. Thus, minor logic additions that can result in increased performance over the existing design are needed. Due to the extensive verification and compatibility testing required following changes to function units, it is further desirable to increase performance with minimal or no changes to these units.
The obvious way to increase performance in a multiple-execution-unit processor is to add an additional function unit identical to an existing one. The existing IEU makes use of a simple single-owner history stack mechanism for flag-register values. Adding a second integer execution unit would appear to require a different, multiple-owner method for restoring flag state following an abort of a speculatively executed instruction. Such a modification would appear to require a significant increase in hardware and would significantly change the existing integer execution unit, requiring extensive verification and compatibility testing. It will be seen that the inventors did not follow this path.
SUMMARY
The existing execution units of a high-performance processor are augmented by the addition of a supplemental integer execution unit, termed the Add/Move Unit (AMU), which performs select adds and moves in parallel and out-of-order with respect to the other execution units. At small incremental cost, AMU enables better use of the expensive limited resources of an existing Address Preparation unit (AP), which handles linear and physical address generation for memory operand references, control transfers, and page crosses. AMU removes data dependencies and thereby increases the available instruction level parallelism. The increased instruction level parallelism is readily exploited by the processor's ability to perform out-of-order and speculative execution, and performance is enhanced as a result.
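The criteria for which adds and moves the AMU accepts are not spelled out in this excerpt. The sketch below illustrates only the dispatch idea, with an assumed selection rule (simple register-to-register adds and moves), so that results feeding later address computations need not wait behind long-latency work in the integer unit's queue.

/* Hedged dispatch sketch: steer simple register-to-register adds and
 * moves to a supplemental Add/Move Unit (AMU) so their results reach
 * the Address Preparation unit without queuing behind long-latency
 * integer operations.  The selection rule here is an assumption. */
#include <stdbool.h>
#include <stdio.h>

enum unit { UNIT_IEU, UNIT_AMU };

struct pop {
    bool is_add_or_move;     /* ADD/MOV-class p-op                */
    bool reg_to_reg;         /* no memory operand, registers only */
};

static enum unit steer(const struct pop *p) {
    if (p->is_add_or_move && p->reg_to_reg)
        return UNIT_AMU;     /* executes in parallel, out of order */
    return UNIT_IEU;
}

int main(void) {
    struct pop divide = { .is_add_or_move = false, .reg_to_reg = true };
    struct pop add    = { .is_add_or_move = true,  .reg_to_reg = true };
    printf("DIVIDE -> %s\n", steer(&divide) == UNIT_AMU ? "AMU" : "IEU");
    printf("ADD    -> %s\n", steer(&add)    == UNIT_AMU ? "AMU" : "IEU");
    return 0;
}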
It is a first object of the instant invention to reduce stalls in the generation of effective addresses, and the
