Electrical computers and digital processing systems: processing – Processing control – Arithmetic operation instruction processing
Reexamination Certificate
2000-02-25
2003-12-30
Pan, Daniel H. (Department: 2783)
Electrical computers and digital processing systems: processing
Processing control
Arithmetic operation instruction processing
C712S221000, C712S227000, C708S204000, C708S205000, C708S495000
Reexamination Certificate
active
06671796
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to processors and, more particularly, to instructions for use with processors.
2. Related Art
Current processors support mathematical operations on real numbers, using either fixed point or floating point representations. Floating point values are typically represented in binary format as an exponent and a mantissa. The exponent represents a power to which a base number such as 2 is raised, and the mantissa is a number to be multiplied by the base number. Accordingly, the actual number represented by a floating point value is the mantissa multiplied by a quantity equal to the base number raised to a power specified by the exponent. In such a manner, any particular number may be approximated in floating point notation as f × B^e, or (f, e), where f is an n-digit signed mantissa, e is an m-digit signed integer exponent, and B is the base of the number system. In most computer systems the base number system used is the binary number system, where B = 2, although some systems use the decimal number system (B = 10) or the hexadecimal number system (B = 16) as their base number system. Floating point values may be added, subtracted, multiplied, or divided, and computing structures for performing these arithmetic operations on binary floating point values are well known in the art.
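To make the f × B^e decomposition concrete, the following minimal C sketch (illustrative only; the variable names mantissa_f and exponent_e are not from the patent) evaluates a value from a signed mantissa and a base-2 exponent:

#include <math.h>
#include <stdio.h>

int main(void) {
    double mantissa_f = 1.5;                      /* n-digit signed mantissa f */
    int    exponent_e = 3;                        /* m-digit signed integer exponent e */
    double value = ldexp(mantissa_f, exponent_e); /* f x B^e with B = 2 */
    printf("%g\n", value);                        /* prints 12 */
    return 0;
}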
Fixed point values, by contrast, are made up of an integer part followed by a fractional part. The number of digits used for the integer and fractional parts of a fixed point value can be varied, although, for convenience, the total number of bits typically remains constant. As a result, a microprocessor can support multiple fixed point formats simply by varying the number of bits used for the integer and fractional parts.
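As an illustration of this varying split (the Q-format notation and the helper name below are assumptions, not terminology from the patent), the same 32-bit word takes on different values depending on how many bits are treated as fractional:

#include <stdint.h>
#include <stdio.h>

/* Interpret a signed 32-bit word as fixed point with frac_bits fraction bits. */
static double fixed_value(int32_t word, int frac_bits) {
    return (double)word / (double)(1u << frac_bits);
}

int main(void) {
    int32_t word = 0x00034000;                 /* the same bit pattern throughout */
    printf("%f\n", fixed_value(word, 16));     /* 16 fraction bits (Q16.16): 3.250000 */
    printf("%f\n", fixed_value(word, 24));     /* 24 fraction bits (Q8.24):  0.012695 */
    return 0;
}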
Similarly, a series of floating point formats exists, each representing a different trade-off between the precision and range of representable numbers (largest to smallest), storage requirements, and the cycles required to compute arithmetic results. In general, longer formats trade increased storage requirements and decreased speed of arithmetic operations (mainly multiplication and division) for greater precision and available range.
ANSI/IEEE Standard 754 defines several floating point formats including single-precision, double-precision, and extended double-precision. Referring to FIG. 5, the format of a 32-bit single precision floating point value is broken into a one-bit sign field “s,” an eight-bit biased exponent field “exp,” a so-called “hidden” bit (which, although not explicitly represented, is assumed to be a one just to the left of the implied binary point), and a 23-bit “mantissa.”
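A short C sketch (an illustration added here, not part of the patent text) that extracts the three explicit fields of this single precision format:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    float f = -6.25f;                            /* -6.25 = -1.5625 x 2^2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);              /* view the 32-bit pattern */
    unsigned s        = bits >> 31;              /* one-bit sign field */
    unsigned exp      = (bits >> 23) & 0xFFu;    /* eight-bit biased exponent (bias 127) */
    unsigned mantissa = bits & 0x7FFFFFu;        /* 23-bit mantissa; hidden bit not stored */
    printf("s=%u exp=%u (unbiased %d) mantissa=0x%06X\n",
           s, exp, (int)exp - 127, mantissa);
    return 0;
}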
Both floating point and fixed point values are typically used on current microprocessors. On general-purpose processors, translations between fixed and floating point values are typically performed in software, thereby requiring multiple instructions to be executed by the processor in order to perform a single translation. Conversions between fixed point and floating point datatypes are needed for several reasons. Fixed point basic arithmetic operations are simpler and usually have a smaller latency than floating point operations. Floating point datatypes, on the other hand, generally cover a wide range of values and dynamically adjust to maintain the most significant bits of the results. Signals acquired from external devices, such as visual and auditory data, generally use fixed point representations, yet floating point computations on these data are sometimes preferred.
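As a sketch of the multi-step software path described above (the Q16.16 format and the helper name q_to_float are assumptions for illustration), converting a fixed point value to floating point in software takes at least an integer-to-float conversion followed by a scale by a power of two, rather than a single operation:

#include <stdint.h>
#include <math.h>
#include <stdio.h>

/* Convert a signed fixed point value with frac_bits fraction bits to float. */
static float q_to_float(int32_t fx, int frac_bits) {
    float as_float = (float)fx;            /* step 1: integer-to-float convert */
    return ldexpf(as_float, -frac_bits);   /* step 2: scale by 2^(-frac_bits) */
}

int main(void) {
    int32_t fx = -(5 << 16) - (1 << 14);   /* -5.25 in Q16.16 */
    printf("%f\n", q_to_float(fx, 16));    /* prints -5.250000 */
    return 0;
}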
SUMMARY OF THE INVENTION
The present invention provides a method and apparatus for efficient conversion operations between floating point and fixed point values to be performed in a general purpose processor. This is achieved by providing an instruction for converting an arbitrary fixed point value fx into a floating point value fl in a general purpose processor.
Accordingly, the invention advantageously provides a general purpose processor with the ability to execute conversion operations between arbitrary fixed point and floating point values with a single instruction, in contrast to prior art general purpose processors that require multiple instructions to perform the same function. Thus, the general purpose processor of the present invention allows for more efficient and faster conversion operations between fixed point and floating point values.
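The sketch below is only a rough C model of the work such a single conversion instruction would perform internally (the function name, the truncating behavior, and the way the binary point position is supplied are assumptions, not the patent's definition); on the processor described here this whole sequence would be carried out by one instruction rather than by a software routine:

#include <stdint.h>

/* Model: build the single precision bit pattern for fx interpreted with
   frac_bits fraction bits. Truncates low mantissa bits (no IEEE rounding)
   and ignores overflow/denormal cases; fx == 0 maps to +0. */
static uint32_t fix2float_bits(int32_t fx, int frac_bits) {
    if (fx == 0)
        return 0;
    uint32_t sign = 0;
    uint32_t mag  = (uint32_t)fx;
    if (fx < 0) {
        sign = 1u << 31;
        mag  = (uint32_t)(-(int64_t)fx);
    }
    int msb = 31;                                       /* locate the leading one */
    while (!(mag & (1u << msb)))
        msb--;
    uint32_t exp = (uint32_t)(msb - frac_bits + 127);   /* biased exponent */
    uint32_t mantissa = (msb > 23)                      /* align hidden bit at position 23 */
                      ? (mag >> (msb - 23)) & 0x7FFFFFu
                      : (mag << (23 - msb)) & 0x7FFFFFu;
    return sign | (exp << 23) | mantissa;               /* sign, exponent, mantissa fields */
}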
REFERENCES:
patent: 4150434 (1979-04-01), Shibayama et al.
patent: 4511990 (1985-04-01), Hagiwara et al.
patent: 6480872 (2002-11-01), Choquette
Chan Jeffrey Meng Wah
Deering Michael F.
Nelson Scott R.
Sudharsanan Subramania
Tremblay Marc
Pan Daniel H.
Sun Microsystems Inc.
Zagorin O'Brien & Graham LLP