Title: Mechanism for power efficient processing in a pipeline...
Patent number: 06351803
Filed: 1999-10-01
Issued: 2002-02-26
Examiner: Eng, David Y. (Department: 2155)
Classification: Electrical computers and digital processing systems: processing; dynamic instruction dependency checking, monitoring or...
Type: Reexamination Certificate
Status: Active
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates in general to microprocessors and, more particularly, to a system, method, and mechanism providing power efficient operation in a pipeline processor.
2. Relevant Background
Computer programs comprise a series of instructions that direct a data processing mechanism to perform specific operations on data. These operations include loading data from memory, storing data to memory, adding, multiplying, and the like. Data processors, including microprocessors, microcontrollers, and the like, include a central processing unit (CPU) comprising one or more functional units that perform various tasks. Typical functional units include a decoder, an instruction cache, a data cache, an integer execution unit, a floating point execution unit, a load/store unit, and the like. A given program may run on a variety of data processing hardware.
Early data processors executed only one instruction at a time. Each instruction was executed to completion before execution of a subsequent instruction began. Each instruction typically requires a number of data processing operations and involves multiple functional units within the processor; hence, an instruction may consume several clock cycles to complete. In a serially executing processor, each functional unit may be busy during only one of those steps and idle during the others. The serial execution of instructions therefore completes less than one instruction per clock cycle.
As used herein, the term “data processor” includes complex instruction set computers (CISC), reduced instruction set computers (RISC), and hybrids. A data processor may be a stand-alone central processing unit (CPU) or an embedded system comprising a processor core integrated with other components to form a special purpose data processing machine. The term “data” refers to digital or binary information that may represent memory addresses, data, instructions, or the like.
In response to the need for improved performance, several techniques have been used to extend the capabilities of these early processors, including pipelining, superpipelining, and superscalar execution. Pipelined architectures attempt to keep all the functional units of a processor busy at all times by overlapping the execution of several instructions. Pipelined designs increase the rate at which instructions can be executed by allowing a new instruction to begin execution before a previous instruction has finished executing. A simple pipeline may have only five stages, whereas an extended pipeline may have ten or more stages. In this manner, the pipeline hides the latency associated with the execution of any particular instruction.
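For illustration only (not part of the claimed invention), the following minimal Python sketch shows how an idealized five-stage pipeline overlaps execution of an instruction stream; the stage names (IF, ID, EX, MEM, WB) and the instructions are assumptions chosen for the example, and no hazards are modeled.

```python
# Minimal sketch of instruction overlap in an idealized five-stage pipeline.
# Stage names and the instruction stream are illustrative assumptions.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def run_pipeline(instructions):
    """Print which instruction occupies each stage on every clock cycle."""
    total_cycles = len(instructions) + len(STAGES) - 1
    for cycle in range(total_cycles):
        occupancy = []
        for stage_index, stage in enumerate(STAGES):
            instr_index = cycle - stage_index
            if 0 <= instr_index < len(instructions):
                occupancy.append(f"{stage}:{instructions[instr_index]}")
            else:
                occupancy.append(f"{stage}:--")
        print(f"cycle {cycle:2d}  " + "  ".join(occupancy))

if __name__ == "__main__":
    # Four instructions finish in 8 cycles instead of the 20 cycles
    # a strictly serial five-step execution would require.
    run_pipeline(["i0", "i1", "i2", "i3"])
```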
The goal of pipelined processors is to execute multiple instructions per cycle (IPC). In practice, pipeline hazards reduce the achievable throughput. Pipeline hazards include structural hazards, data hazards, and control hazards. Structural hazards arise when more than one instruction in the pipeline requires a particular hardware resource at the same time (e.g., two execution units requiring access to a single ALU resource in the same clock cycle). Data hazards arise when an instruction needs as input the output of an instruction that has not yet produced that output. Control hazards arise when an instruction changes the program counter (PC), because execution cannot continue until the target instruction at the new PC has been fetched.
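As a sketch of the data-hazard case only (again not taken from the patent), the code below flags a read-after-write dependency between an incoming instruction and instructions still in flight; the Instr fields and register names are assumptions made for the example.

```python
# Minimal sketch of read-after-write (RAW) data-hazard detection between an
# instruction entering the pipeline and instructions still in flight.
# The Instr fields are illustrative assumptions, not the patent's encoding.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Instr:
    name: str
    dest: Optional[str]                                 # register written, if any
    sources: List[str] = field(default_factory=list)    # registers read

def has_raw_hazard(new_instr: Instr, in_flight: List[Instr]) -> bool:
    """True if new_instr reads a register that an unfinished instruction writes."""
    pending_writes = {i.dest for i in in_flight if i.dest is not None}
    return any(src in pending_writes for src in new_instr.sources)

if __name__ == "__main__":
    add = Instr("add r3, r1, r2", dest="r3", sources=["r1", "r2"])
    sub = Instr("sub r5, r3, r4", dest="r5", sources=["r3", "r4"])
    # sub needs r3, which add has not yet written back: a data hazard.
    print(has_raw_hazard(sub, in_flight=[add]))   # True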
When hazards occur, the processor must stall or place “bubbles” (e.g., NOPs) in the pipeline until the hazard condition is resolved. This increases latency and decreases instruction throughput. As pipelines become longer, the likelihood of hazards increases. Hence, an effective mechanism for handling hazard conditions is important to achieving the benefits of deeper pipelines.
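A minimal sketch of the stall behavior just described, assuming a simplified issue interface of our own devising: while a hazard is pending, a NOP bubble is advanced into the pipeline in place of the stalled instruction.

```python
# Minimal sketch of stalling: while the instruction at the decode stage has an
# unresolved hazard, a NOP "bubble" is issued instead. The issue interface is
# an illustrative assumption, not the patent's control logic.

NOP = "nop"

def issue_next(decoded_instr, hazard_pending):
    """Return the operation to push into the next pipeline stage this cycle."""
    if hazard_pending:
        return NOP            # hold the real instruction, advance a bubble
    return decoded_instr      # hazard resolved: let the instruction proceed

if __name__ == "__main__":
    # The hazard clears after two cycles, so two bubbles precede the instruction.
    hazard_by_cycle = [True, True, False]
    for cycle, pending in enumerate(hazard_by_cycle):
        print(cycle, issue_next("sub r5, r3, r4", pending))
```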
Another goal of many processors is to control the power used by the processor. Many applications, particularly those directed at mobile or battery-operated environments, require low power usage. The execution pipelines of a processor consume a significant amount of power, and that consumption is largely caused by moving data between registers, register files, and execution units. As data paths become wider, the power consumed to move the data increases.
Hence, to execute instructions efficiently at high throughput within a pipeline, it is important to coordinate and control the flow of instructions, operations, and data within the execution pipeline. The order and manner in which the operands and results of these instructions are made available to one another within the execution pipeline is of critical importance to pipeline throughput.
SUMMARY OF THE INVENTION
The present invention involves a processor including a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result. A pipefile having at least the same number of entries as the number of execution pipeline stages is included in the processor. A pointer register is associated with each execution pipeline stage. A value is stored in at least one of the pointer registers, the value indicating a particular one of the entries in the pipefile.
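A minimal sketch of this arrangement, with class and field names that are assumptions rather than the patent's terminology: a pipefile holds one result entry per execution stage, and a per-stage pointer register records which entry is assigned to the instruction currently occupying that stage.

```python
# Minimal sketch of a pipefile with one entry per execution stage and a
# pointer register per stage naming the entry assigned to the instruction
# in that stage. Names are illustrative assumptions.

class Pipefile:
    def __init__(self, num_stages: int):
        self.num_stages = num_stages
        # One result slot per entry; at least as many entries as stages.
        self.entries = [None] * num_stages
        # stage_pointer[s] holds the pipefile entry index assigned to the
        # instruction in execution stage s (None if the stage is empty).
        self.stage_pointer = [None] * num_stages

    def write_result(self, stage: int, value):
        """Record the result produced in a stage into its assigned entry."""
        entry = self.stage_pointer[stage]
        if entry is not None:
            self.entries[entry] = value

if __name__ == "__main__":
    pf = Pipefile(num_stages=5)
    pf.stage_pointer[2] = 0        # instruction in stage 2 owns pipefile entry 0
    pf.write_result(stage=2, value="add-result")
    print(pf.entries)              # ['add-result', None, None, None, None]
```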
The present invention involves a method, system and apparatus for forwarding data within a pipeline of a pipelined data processor having a plurality of execution pipeline stages where each stage accepts a plurality of operand inputs and generates a result. A pipefile is implemented having at least the same number of entries as the number of execution pipeline stages. Each new instruction is assigned to one of the entries in the pipefile before the new instruction is executed. The pipefile entry assignment remains valid while the instruction remains in any of the execution pipeline stages. The new instruction is passed through the execution pipeline stages to generate a result. Upon successful completion of executing the new instruction, the result is written back from the assigned pipefile entry to an architectural register.
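The lifecycle summarized above can be sketched as follows; the allocate/execute/retire interface and the dictionary standing in for the architectural register file are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the summarized lifecycle: an entry is assigned before
# execution, results land in that entry while the instruction is in the
# pipeline, and on successful completion the entry is written back to the
# architectural register file. Names are assumptions.

class PipefileCore:
    def __init__(self, num_stages: int):
        self.free_entries = list(range(num_stages))   # at least one per stage
        self.pipefile = [None] * num_stages           # in-flight results
        self.arch_regs = {}                           # architectural registers

    def allocate(self) -> int:
        """Assign a pipefile entry to a new instruction before it executes."""
        return self.free_entries.pop(0)

    def execute(self, entry: int, result):
        """A pipeline stage deposits the instruction's result into its entry."""
        self.pipefile[entry] = result

    def retire(self, entry: int, dest_reg: str):
        """On successful completion, write back from the entry to the register."""
        self.arch_regs[dest_reg] = self.pipefile[entry]
        self.free_entries.append(entry)               # entry can be reused

if __name__ == "__main__":
    core = PipefileCore(num_stages=5)
    e = core.allocate()            # new instruction gets a pipefile entry
    core.execute(e, result=42)     # result produced somewhere in the pipeline
    core.retire(e, dest_reg="r3")  # write back only after successful completion
    print(core.arch_regs)          # {'r3': 42}
```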
The foregoing and other features, utilities and advantages of the invention will be apparent from the following more particular description of a preferred embodiment of the invention as illustrated in the accompanying drawings.
REFERENCES:
patent: 4814981 (1989-03-01), Rubinfeld
patent: 5251311 (1993-10-01), Kasai
patent: 5386565 (1995-01-01), Tanaka et al.
patent: 5423050 (1995-06-01), Taylor et al.
patent: 5434804 (1995-07-01), Bock et al.
patent: 5440705 (1995-08-01), Wang et al.
patent: 5448576 (1995-09-01), Russell
patent: 5452432 (1995-09-01), Macachor
patent: 5455936 (1995-10-01), Maemura
patent: 5479652 (1995-12-01), Dreyer et al.
patent: 5483518 (1996-01-01), Whetsel
patent: 5488688 (1996-01-01), Gonzales et al.
patent: 5530965 (1996-06-01), Kawasaki et al.
patent: 5570375 (1996-10-01), Tsai et al.
patent: 5590354 (1996-12-01), Klapproth et al.
patent: 5596734 (1997-01-01), Ferra
patent: 5598551 (1997-01-01), Barajas et al.
patent: 5608881 (1997-03-01), Masumura et al.
patent: 5613153 (1997-03-01), Arimilli et al.
patent: 5627842 (1997-05-01), Brown et al.
patent: 5657273 (1997-08-01), Ayukawa et al.
patent: 5682545 (1997-10-01), Kawasaki et al.
patent: 5704034 (1997-12-01), Circello
patent: 5708773 (1998-01-01), Jeppesen, III et al.
patent: 5724549 (1998-03-01), Selgas et al.
patent: 5737516 (1998-04-01), Circello et al.
patent: 5751621 (1998-05-01), Arakawa
patent: 5768152 (1998-06-01), Battaline et al.
patent: 5771240 (1998-06-01), Tobin et al.
patent: 5774701 (1998-06-01), Matsui et al.
patent: 5778237 (1998-07-01), Yamamoto et al.
patent: 5781558 (1998-07-01), Inglis et al.
patent: 5796978 (1998-08-01), Yoshioka et al.
patent: 5828825 (1998-10-01), Eskandari et al.
patent: 5832248 (1998-11-01), Kishi et al.
patent: 5835963 (1998-11-01), Yoshioka et al.
patent: 5848247 (1998-12-01), Matsui et al.
patent: 5860127 (1999-01-01), Shimazaki et al.
patent: 5862387 (1999-01-01), Songer et al.
patent: 5867726 (1999-02-01), Ohsuga et al.
patent: 5884092 (1999-03-01), Kiuchi et al.
patent: 5896550 (1999-04-01), Wehunt et al.
patent: 5918
Inventors: Chua-Eoan Lew; Peng Chih-Jui
Examiner: Eng, David Y.
Assignee: Hitachi, Ltd.
Attorney: Townsend and Townsend and Crew LLP