Electrical computers and digital processing systems: processing – Processing control – Arithmetic operation instruction processing
Reexamination Certificate
1999-09-10
2002-07-23
Pan, Daniel H. (Department: 2183)
Electrical computers and digital processing systems: processing
Processing control
Arithmetic operation instruction processing
C712S228000, C712S245000, C712S217000, C708S495000, C708S525000, C708S501000, C708S204000, C711S123000
Reexamination Certificate
active
06425074
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to the field of microprocessors and, more particularly, to floating point units within microprocessors.
2. Description of the Related Art
Most microprocessors must support multiple data types. For example, x86-compatible microprocessors must execute two types of instructions: one set defined to operate on integer data types and another set defined to operate on floating point data types. In contrast with integers, floating point numbers have fractional components and are typically represented in exponent-significand format. For example, the values 2.15×10^3 and −10.5 are floating point numbers, while the numbers −1, 0, and 7 are integers. The term “floating point” is derived from the fact that there is no fixed number of digits before or after the decimal point, i.e., the decimal point can float. Using the same number of bits, the floating point format can represent numbers within a much larger range than integer format. For example, a 32-bit signed integer can represent the integers between −2^31 and 2^31−1 (using two's complement format). In contrast, a 32-bit (“single precision”) floating point number as defined by the Institute of Electrical and Electronics Engineers (IEEE) Standard 754 has a range (in normalized format) from 2^−126 to 2^127×(2−2^−23) in both positive and negative numbers.
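The range comparison above can be checked numerically. The sketch below (a minimal illustration, not part of the patent) computes the 32-bit signed integer limits and the single-precision extremes from the same formulas:

```python
# 32-bit signed integer range (two's complement): -2^31 to 2^31 - 1
int_min, int_max = -2**31, 2**31 - 1

# IEEE 754 single-precision normalized range, per the formulas above:
# largest finite magnitude is 2^127 * (2 - 2^-23),
# smallest positive normalized value is 2^-126.
float_max = (2.0 - 2.0**-23) * 2.0**127
float_min_normal = 2.0**-126

print(int_max)           # 2147483647
print(float_max)         # 3.4028234663852886e+38
print(float_min_normal)  # 1.1754943508222875e-38
```

Both extremes are exactly representable in Python's double-precision floats, so the computed values match the IEEE 754 single-precision limits exactly.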
FIG. 1 illustrates an exemplary format for an 8-bit integer 100. As the figure illustrates, negative integers are represented using the two's complement format 106. To negate an integer, all bits are inverted to obtain the one's complement format 102. A constant 104 of one is then added to the least significant bit (LSB).
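The negation procedure described for FIG. 1 (invert all bits, then add one to the LSB) can be sketched as follows; the function name is illustrative, not taken from the patent:

```python
def twos_complement_negate(value, bits=8):
    """Negate an integer as described for FIG. 1: invert all bits
    (one's complement), then add a constant of one to the LSB."""
    mask = (1 << bits) - 1
    ones = value ^ mask           # invert all bits (one's complement)
    return (ones + 1) & mask      # add 1, keep the result within `bits` bits

# Negating 5 (0b00000101) in 8 bits yields 0b11111011,
# the two's complement encoding of -5.
print(bin(twos_complement_negate(5)))  # 0b11111011
```

Applying the operation twice returns the original value, as expected for two's complement negation.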
FIG. 2 shows an exemplary format for a floating point value. Value 110 is a 32-bit (single precision) floating point number. Value 110 is represented by a significand 112 (23 bits), a biased exponent 114 (8 bits), and a sign bit 116. The base for the floating point number (2 in this case) is raised to the power of the exponent and multiplied by the significand to arrive at the number represented. In microprocessors, base 2 is most common. The significand comprises a number of bits used to represent the most significant digits of the number. Typically, the significand comprises one bit to the left of the radix point and the remaining bits to the right of the radix point. A number in this form is said to be “normalized”. In order to save space, in some formats the bit to the left of the radix point, known as the integer bit, is not explicitly stored. Instead, it is implied in the format of the number.
Floating point values may also be represented in 64-bit (double precision) or 80-bit (extended precision) format. As with the single precision format, a double precision format value is represented by a significand (52 bits), a biased exponent (11 bits), and a sign bit. An extended precision format value is represented by a significand (64 bits), a biased exponent (15 bits), and a sign bit. However, unlike the other formats, the significand in extended precision includes an explicit integer bit. Additional information regarding floating point number formats may be obtained in IEEE Standard 754.
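The double-precision layout described above (52-bit significand, 11-bit biased exponent, sign bit) can be decoded the same way; again a generic IEEE 754 sketch, not text from the patent:

```python
import struct

def float64_fields(x):
    """Sign bit, 11-bit biased exponent, and 52-bit significand field
    of an IEEE 754 double-precision value."""
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    sign = bits >> 63
    biased_exp = (bits >> 52) & 0x7FF
    significand = bits & ((1 << 52) - 1)
    return sign, biased_exp, significand

# 1.0: sign 0, biased exponent 1023 (bias is 2^10 - 1), fraction 0
print(float64_fields(1.0))   # (0, 1023, 0)
print(float64_fields(-2.0))  # (1, 1024, 0)
```

The 80-bit extended-precision format cannot be unpacked with `struct`, but, as the text notes, it differs in that its 64-bit significand stores the integer bit explicitly.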
The recent increased demand for graphics-intensive applications (e.g., 3D games and virtual reality programs) has placed greater emphasis on a microprocessor's floating point performance. Given the vast amount of software available for x86 microprocessors, there is particularly high demand for x86-compatible microprocessors having high performance floating point units. Thus, microprocessor designers are continually seeking new ways to improve the floating point performance of x86-compatible microprocessors. While some x86 floating point instructions perform arithmetic (e.g., FADD, which adds two floating point numbers), other floating point instructions perform logic functions. For example, the instruction FCOM performs a comparison of two real values. Other examples of x86 floating point instructions that perform compares are FTST (compares the top of the stack with zero) and FICOM (compare integer). Still other x86 floating point instructions perform control functions. For example, the instruction FSTSW stores the floating point unit's architectural status word to a specified destination (e.g., memory or the integer register AX).
One technique used by microprocessor designers to improve the performance of all floating point instructions is pipelining. In a pipelined microprocessor, the microprocessor begins executing a second instruction before the first has been completed. Thus, several instructions are in the pipeline simultaneously, each at a different processing stage. The pipeline is divided into a number of pipeline stages, and each stage can execute its operation concurrently with the other stages. When a stage completes an operation, it passes the result to the next stage in the pipeline and fetches the next operation from the preceding stage. The final results of each instruction emerge at the end of the pipeline in rapid succession.
Another popular technique used to improve floating point performance is out-of-order execution. Out-of-order execution involves reordering the instructions being executed (to the extent allowed by dependencies) so as to keep as many of the microprocessor's floating point execution units as busy as possible. As used herein, a microprocessor may have a number of execution units (also called functional units), each optimized to perform a particular task or set of tasks. For example, one execution unit may be optimized to perform integer addition, while another execution unit may be configured to perform floating point addition.
Typical pipeline stages in a modern microprocessor include fetching, decoding, address generation, scheduling, execution, and retiring. Fetching entails loading the instruction from the instruction cache. Decoding involves examining the fetched instruction to determine how large it is, whether or not it requires an access to memory to read data for execution, etc. Address generation involves calculating memory addresses for instructions that access memory. Scheduling involves the task of determining which instructions are available to be executed and then conveying those instructions and their associated data to the appropriate execution units. The execution stage actually executes the instructions based on information provided by the earlier stages. After the instruction is executed, the results produced are written back either to an internal register or the system memory during the retire stage.
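The overlap described above, with each instruction advancing one stage per cycle so that several instructions are in flight at once, can be sketched with a toy timeline. The stage names follow the paragraph above; the simulation itself is a hypothetical illustration, not the patent's mechanism:

```python
# Idealized in-order pipeline: instruction i enters the first stage at
# cycle i and advances one stage per cycle (no stalls or hazards).
STAGES = ["fetch", "decode", "addr-gen", "schedule", "execute", "retire"]

def pipeline_timeline(n_instructions):
    """Return, for each cycle, a dict mapping each occupied stage to the
    index of the instruction currently in it."""
    total_cycles = n_instructions + len(STAGES) - 1
    timeline = []
    for cycle in range(total_cycles):
        occupancy = {}
        for s, stage in enumerate(STAGES):
            instr = cycle - s  # instruction that reached stage s this cycle
            if 0 <= instr < n_instructions:
                occupancy[stage] = instr
        timeline.append(occupancy)
    return timeline

# With 3 instructions, by cycle 2 all three occupy different stages.
timeline = pipeline_timeline(3)
print(timeline[2])  # {'fetch': 2, 'decode': 1, 'addr-gen': 0}
```

Three instructions finish in 8 cycles instead of the 18 a non-pipelined design would need, which is the throughput gain the paragraph describes.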
While pipelining produces significant improvements in performance, it has some limitations. In particular, certain instructions in certain floating point implementations are unable to be scheduled until all previous instructions have completed execution and have been retired (i.e., committed to the processor's architectural state). One such instruction is FSTSW (floating point store status word). The FSTSW instruction is configured to access the floating point unit's architectural floating-point status word. As a result, the FSTSW instruction may be referred to as a “bottom executing” instruction because it is not scheduled for execution until all preceding instructions have been executed and retired. Furthermore, instructions occurring after the FSTSW instruction may not be scheduled until after the FSTSW instruction has been scheduled. These problems may be exacerbated when two FSTSW instructions occur near each other in the instruction stream.
Thus, an efficient method for rapidly executing FSTSW-type instructions is desired. In modern x86 floating point software, a significant percentage of FSTSW occurrences are immediately preceded by a floating point compare instruction, e.g., FCOM (floating point compare), FTST (compares the top of the stack with zero), or FICOM (compare integer). Thus, an efficient method for rapidly executing FCOM and FSTSW instruction sequences is particularly desirable.
Juffa Norbert
Meier Stephan G.
Oberman Stuart F.
Weber Frederick D.
Advanced Micro Devices, Inc.
Conley Rose & Tayon PC
Kivlin B. Noël
Pan Daniel H.