Rounding denormalized numbers in a pipelined floating point...

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed


Details

Reexamination Certificate

active

06721772

ABSTRACT:

TECHNICAL FIELD OF THE INVENTION
The present invention is directed, in general, to processors and, more particularly, to rounding denormalized numbers in a pipelined floating point unit (FPU) without pipeline stalls.
BACKGROUND OF THE INVENTION
The ever-growing requirement for high performance computers demands that computer hardware architectures maximize software performance. Conventional computer architectures are made up of three primary components: (1) a processor, (2) a system memory and (3) one or more input/output devices. The processor controls the system memory and the input/output (“I/O”) devices. The system memory stores not only data, but also instructions that the processor is capable of retrieving and executing to cause the computer to perform one or more desired processes or functions. The I/O devices are operative to interact with a user through a graphical user interface (“GUI”) (such as provided by Microsoft Windows™ or IBM OS/2™), a network portal device, a printer, a mouse or other conventional device for facilitating interaction between the user and the computer.
Over the years, the quest for ever-increasing processing speeds has followed different directions. One approach to improve computer performance is to increase the rate of the clock that drives the processor. As the clock rate increases, however, the processor's power consumption and temperature also increase. Increased power consumption is expensive and high circuit temperatures may damage the processor. Further, the processor clock rate cannot increase beyond the physical limit imposed by the time required for signals to traverse the processor. Simply stated, there is a practical maximum to the clock rate that is acceptable to conventional processors.
An alternate approach to improve computer performance is to increase the number of instructions executed per clock cycle by the processor (“processor throughput”). One technique for increasing processor throughput is pipelining, which calls for the processor to be divided into separate processing stages (collectively termed a “pipeline”). Instructions are processed in an “assembly line” fashion in the processing stages. Each processing stage is optimized to perform a particular processing function, thereby causing the processor as a whole to become faster.
“Superpipelining” extends the pipelining concept further by allowing the simultaneous processing of multiple instructions in the pipeline. Consider, as an example, a processor in which each instruction executes in six stages, each stage requiring a single clock cycle to perform its function. Six separate instructions can therefore be processed concurrently in the pipeline; i.e., the processing of one instruction is completed during each clock cycle. The instruction throughput of an n-stage pipelined architecture is therefore, in theory, n times greater than the throughput of a non-pipelined architecture capable of completing only one instruction every n clock cycles.
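The throughput claim can be made concrete with a back-of-the-envelope counting model (an editorial illustration, not part of the patent): k instructions through an n-stage pipeline complete in roughly n + k - 1 cycles once the pipeline is full, versus n * k cycles if each instruction must drain completely before the next begins.

    #include <stdio.h>

    /* Simple counting model of pipelined versus unpipelined execution.
     * This is an illustrative sketch only; real pipelines incur stalls
     * that reduce the ideal speedup computed here. */
    int main(void)
    {
        const int n = 6, k = 1000;          /* 6 stages, 1000 instructions */
        int pipelined   = n + k - 1;        /* 1005 cycles                  */
        int unpipelined = n * k;            /* 6000 cycles                  */
        printf("speedup ~ %.2fx\n", (double)unpipelined / pipelined);  /* ~5.97x */
        return 0;
    }
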
Another technique for increasing overall processor speed is “superscalar” processing. Superscalar processing calls for multiple instructions to be processed per clock cycle. Assuming that instructions are independent of one another (the execution of each instruction does not depend upon the execution of any other instruction), processor throughput is increased in proportion to the number of instructions processed per clock cycle (“degree of scalability”). If, for example, a particular processor architecture is superscalar to degree three (i.e., three instructions are processed during each clock cycle), the instruction throughput of the processor is theoretically tripled.
These techniques are not mutually exclusive; processors may be both superpipelined and superscalar. However, operation of such processors in practice is often far from ideal, as instructions tend to depend upon one another and are also often not executed efficiently within the pipeline stages. In actual operation, instructions often require varying amounts of processor resources, creating interruptions (“bubbles” or “stalls”) in the flow of instructions through the pipeline. Consequently, while superpipelining and superscalar techniques do increase throughput, the actual throughput of the processor ultimately depends upon the particular instructions processed during a given period of time and the particular implementation of the processor's architecture.
The speed at which a processor can perform a desired task is also a function of the number of instructions required to code the task. A processor may require one or many clock cycles to execute a particular instruction. Thus, in order to enhance the speed at which a processor can perform a desired task, both the number of instructions used to code the task as well as the number of clock cycles required to execute each instruction should be minimized.
Statistically, certain instructions are executed more frequently than others. If the design of a processor is optimized to rapidly process the instructions that occur most frequently, then the overall throughput of the processor can be increased. Unfortunately, optimizing a processor for certain frequent instructions usually comes at the expense of other, less frequent instructions, or requires additional circuitry that increases the size of the processor.
As computer programs have become increasingly graphics-oriented, processors have had to deal more and more with operations on numbers in floating point notation. Thus, to enhance the throughput of a processor that must generate, for example, the data necessary to represent graphical images, it is desirable to optimize the processor to efficiently process numbers in floating point notation.
One aspect of operations involving numbers in floating point notation is “rounding”, which is basically the increasing or decreasing of the least significant bit of a floating point operand to conform the operand to a desired degree of precision; the IEEE Standard 754 defines the formats for various levels of precision. In an FPU, rounding operations may be required in combination with a floating-point adder unit (“FAU”), a floating-point multiplication unit (“FMU”), and a store unit. To simplify the design and fabrication of the FPU, it is desirable to employ a rounding unit that is “modularized”, i.e., which can be universally employed, without modification, in combination with a FAU, FMU, or floating-point store unit.
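As a rough illustration of the rounding step described above, the following C sketch applies IEEE 754 round-to-nearest-even using least significant (L), round (R) and sticky (S) bits; the function name and bit widths are editorial assumptions for illustration, not the patent's circuit.

    #include <stdint.h>
    #include <stdio.h>

    /* Round a result mantissa to a narrower precision using L, R and S bits.
     * 'mant' carries 'extra' low-order bits beyond the target precision
     * (extra >= 1); the return value is the mantissa truncated to the target
     * width and rounded to nearest, ties to even. A carry out of the top bit
     * would require renormalization by the caller. */
    static uint64_t round_nearest_even(uint64_t mant, unsigned extra)
    {
        uint64_t kept = mant >> extra;                     /* surviving bits            */
        int L = kept & 1;                                  /* least significant kept bit */
        int R = (mant >> (extra - 1)) & 1;                 /* first discarded bit        */
        uint64_t below = mant & ((1ULL << (extra - 1)) - 1);
        int S = below != 0;                                /* OR of remaining discarded bits */

        if (R && (S || L))                                 /* round up; on a tie only if L is odd */
            kept += 1;
        return kept;
    }

    int main(void)
    {
        /* 0b10111 with one extra bit: L=1, R=1, S=0 -> ties-to-even rounds up to 0b1100 */
        printf("%llu\n", (unsigned long long)round_nearest_even(0x17, 1));  /* prints 12 */
        return 0;
    }
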
Implementation of the IEEE 754 standard for rounding has always posed a challenge for FPU designers. The rounding process is complicated by the fact that the Intel x87 architecture supports denormal numbers and gradual underflow. Rounding for numbers in the subnormal range is a function of the method by which the numbers are stored in the machine; storing denormal numbers in the normal format helps to eliminate a normalization step that would otherwise be required when such numbers are operated upon, but poses a problem in the rounding step due to the variable location of the binary point.
Therefore, what is needed in the art is a system and method for rounding denormalized numbers and a processor employing the same. Preferably, the system or method is embodied in a modular circuit that is suitably operative in combination with a FAU, a FMU, and a floating-point store unit.
SUMMARY OF THE INVENTION
To address the above-discussed deficiencies of the prior art, it is a primary object of the present invention to provide rounding logic capable of handling denormalized numbers and a processor employing the same.
In the attainment of the above primary object, the present invention provides, for use in a processor having a floating point unit (FPU) capable of managing denormalized numbers in floating point notation, logic circuitry for, and a method of, generating least significant (L), round (R) and sticky (S) bits for a denormalized number. In one embodiment, the system includes: (1) a bit mask decoder that produces a bit mask that is a function of a precision of the denormalized number and an e
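Although the summary is truncated here, a mask-based derivation of the L, R and S bits might look like the following C sketch. The structure, names and widths are editorial assumptions for illustration only; in the claimed system the rounding position would be supplied by a bit mask decoder driven by the precision (and, for denormals, the exponent) of the number.

    #include <stdint.h>
    #include <stdio.h>

    struct lrs { int l, r, s; };

    /* Derive L, R and S from a 64-bit working significand 'sig', where
     * 'round_pos' (< 63) is the bit index of the round bit. Upstream logic
     * is assumed to compute round_pos from the target precision and, for a
     * denormalized result, from its exponent. */
    static struct lrs make_lrs(uint64_t sig, unsigned round_pos)
    {
        uint64_t r_mask = 1ULL << round_pos;        /* isolates the round bit       */
        uint64_t s_mask = r_mask - 1;               /* everything below it: sticky  */
        struct lrs out;
        out.r = (sig & r_mask) != 0;
        out.s = (sig & s_mask) != 0;
        out.l = (sig >> (round_pos + 1)) & 1;       /* least significant kept bit   */
        return out;
    }

    int main(void)
    {
        struct lrs b = make_lrs(0xC01, 10);
        printf("L=%d R=%d S=%d\n", b.l, b.r, b.s);  /* prints L=1 R=1 S=1 */
        return 0;
    }
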
