Fast multiplication of floating point values and integer...

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed

Reexamination Certificate


Details

US classifications: C708S512000, C708S517000, C712S222000
Type: Reexamination Certificate
Status: active
Patent number: 06233595

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to the field of floating point arithmetic in microprocessors and software.
2. Description of the Related Art
Most microprocessors are configured to operate on multiple data types, with the most common data types being integer and floating point. Integer numbers are positive or negative whole numbers, i.e., they have no fractional component. In contrast, floating point numbers are real numbers and have both a fractional component and an exponent.
Each different data type is stored in memory using a different format. Turning now to FIG. 1, a diagram illustrating a number of the most common data type formats as implemented in the x86 instruction set is shown. As the figure illustrates, integers may be stored in three different precisions: word integer 200, short integer 202, and long integer 204, each having the least significant bit stored as bit 0. In order to represent both positive and negative integers, most instruction sets assume that two's complement notation will be used to store negative integers. To represent a negative integer in two's complement form, the magnitude (or absolute value) of the integer is inverted in a bit-wise manner, and a one is added to the least significant bit. For example, to negate +7₁₀ (0111₂), each bit is inverted (1000₂) to obtain the one's complement version. Then a constant one is added to the least significant bit to obtain −7₁₀ (1001₂). Two's complement form is particularly useful because it allows positive and negative integers to be added using simple combinatorial logic. Using the previous example, +7₁₀ (0111₂) + −7₁₀ (1001₂) = 0₁₀ (0000₂).
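The invert-and-add-one procedure described above can be sketched in Python. The helper below is purely illustrative and not part of the patent; it negates a value within a fixed bit width and shows that adding +7 and −7 with plain binary addition wraps to zero.

```python
def twos_complement_negate(value: int, bits: int = 4) -> int:
    """Negate an integer by bitwise inversion plus one (two's complement)."""
    mask = (1 << bits) - 1
    ones_complement = value ^ mask       # invert every bit
    return (ones_complement + 1) & mask  # add one, discard any carry out

# Negating +7 in a 4-bit field: 0111 -> 1000 (one's complement) -> 1001
neg = twos_complement_negate(0b0111)
print(f"{neg:04b}")                      # 1001

# Adding +7 and -7 with ordinary binary addition yields zero (carry discarded)
print(f"{(0b0111 + neg) & 0b1111:04b}")  # 0000
```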
As FIG. 1 illustrates, floating point numbers may also be stored in three different precisions: single precision 206, double precision 208, and extended double precision 210. Floating point numbers are represented by a sign bit, an exponent, and a mantissa (or significand). An asserted sign bit represents a negative value, whereas an unasserted sign bit represents a positive value. A floating point number's base (usually 2) is raised to the power of the exponent and multiplied by the mantissa to arrive at the number represented. The mantissa comprises a number of bits used to represent the most significant digits of the number. Typically, the mantissa comprises one bit to the left of the radix point, with the remaining bits to the right of the radix point. The bit to the left of the radix point, known as the integer bit, is usually not explicitly stored. Instead, it is implied in the format of the number. Additional information regarding floating point numbers and operations performed thereon may be obtained in the Institute of Electrical and Electronics Engineers (IEEE) Standard 754.
Floating point formats can represent numbers within a much larger range than integer formats. For example, a 32-bit signed integer can represent the integers between −2³¹ and 2³¹−1 when two's complement format is used. A single precision floating point number as defined by IEEE Standard 754 comprises 32 bits (a one-bit sign, an 8-bit biased exponent, and a 23-bit mantissa) and has a range from approximately 2⁻¹²⁶ to 2¹²⁷ in both positive and negative numbers. A double precision (64-bit) floating point value has a range from approximately 2⁻¹⁰²² to 2¹⁰²³ in both positive and negative numbers. Finally, an extended precision (80-bit) floating point number (in which the integer bit is explicitly stored) has a range from approximately 2⁻¹⁶³⁸² to 2¹⁶³⁸³ in both positive and negative numbers.
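The single-precision range quoted above can be checked against an actual IEEE 754 encoding. This Python sketch (illustrative only, not part of the patent) computes the extreme normal single-precision magnitudes from the format's definition and confirms they survive a round trip through a packed 32-bit float, while a 32-bit two's complement integer covers a far narrower span.

```python
import struct

# Extreme normal single-precision magnitudes, from the format's definition:
# unbiased exponents run from -126 to +127, and the 23-bit mantissa tops
# out just below 2.0.
smallest_normal = 2.0 ** -126
largest_normal = (2.0 - 2.0 ** -23) * 2.0 ** 127

# Round-tripping through a packed 32-bit float confirms both values are
# exactly representable in single precision.
for value in (smallest_normal, largest_normal):
    packed = struct.pack("<f", value)
    assert struct.unpack("<f", packed)[0] == value

# By contrast, a 32-bit two's complement integer covers only this range:
print(-2**31, 2**31 - 1)
```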
Turning now to FIG. 2, more detail of each floating point precision is shown. Equation 216a represents a formula for determining the actual value of a number in single precision floating point format. As equation 216b illustrates, the exponent bias in single precision format is +127₁₀. Similarly, equations 218a and 220a are formulas for determining the actual values of numbers in double and extended precision, respectively. The exponent bias for double precision is +1023₁₀ (see 218b), and the exponent bias for extended precision is +16,383₁₀ (see 220b).
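The single-precision value formula and its +127 bias can be demonstrated concretely. The Python sketch below (a hypothetical illustration, not taken from the patent) splits a value into its sign, biased exponent, and mantissa fields, then reconstructs it as (−1)^sign × 2^(exponent − 127) × (1 + mantissa/2²³), with the implied integer bit restored as the leading 1.

```python
import struct

def decode_single(value: float) -> tuple[int, int, int]:
    """Split an IEEE 754 single-precision value into its sign (1 bit),
    biased exponent (8 bits), and mantissa (23 bits) fields."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

sign, exponent, mantissa = decode_single(-6.5)
# Reconstruct with the single-precision formula; the "1 +" term is the
# implied integer bit that is not explicitly stored.
reconstructed = (-1) ** sign * 2.0 ** (exponent - 127) * (1 + mantissa / 2 ** 23)
print(reconstructed)  # -6.5
```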
In order to perform calculations more efficiently, microprocessors typically have optimized circuitry to execute arithmetic operations on each data type. The simplest circuit may be configured to perform addition or subtraction on integer values. As shown in the example above, integer addition and subtraction may be performed using simple combinatorial and inverting logic. This logic is typically referred to as an adder. In contrast, the most complex circuitry is typically for performing multiplication or division on floating point values. This complex circuitry is typically referred to as a multiplier. Multipliers are complex because each multiplication operation is translated into a plurality of additions. The number of additions that must be performed increases with the bit-length of the operands. The multiplication of floating point values is further complicated because the resulting product must be "normalized." Normalization involves shifting the mantissa so that the most significant asserted bit is directly to the right of the binary radix point. The product's exponent must also be incremented or decremented accordingly.
As a result of these inherent complications in multiplication, and floating point multiplication in particular, these instructions typically take significantly more clock cycles to complete than simple integer addition instructions. For example, a floating point multiplication may require on the order of five to ten clock cycles to complete, whereas an integer addition may be completed in just one clock cycle. Many microprocessors are configured with more than one adder to perform integer addition, which makes the effective throughput of integer addition instructions greater than one instruction per clock cycle. In contrast, microprocessors rarely have the die space available to implement more than one floating point multiplier. These factors contribute to an even greater disparity between integer addition and floating point multiplication performance.
Recent advances in microprocessor and software design have placed a greater emphasis upon arithmetic performance than ever before. Applications such as 3D graphics rendering and texture mapping rely heavily upon a microprocessor's ability to quickly execute large numbers of arithmetic operations, and floating point arithmetic operations in particular. Another application placing even heavier demands upon a microprocessor's floating point arithmetic capabilities is the compression and decompression of digital audio and video data. As a result, a method and apparatus for increasing a microprocessor's ability to rapidly execute floating point arithmetic instructions is needed.
SUMMARY
The problems outlined above are in large part solved by a method for performing fast multiplication in accordance with the present invention.
In one embodiment, the method involves detecting multiplication operations that have a floating point operand and an integer operand that is an integer power of two, e.g., ±2⁻¹, ±2⁰, ±2¹, ±2², et seq. Once detected, the multiplication operations are executed by using an adder to sum the integer power and the floating point operand's exponent. Advantageously, the floating point multiplication instruction is executed using the faster integer adder in lieu of the slower floating point multiplier. In order to support positive and negative values, the method may further comprise inverting the floating point operand's sign bit to generate the product's sign bit if the integer operand is negative.
Also contemplated is a method for accelerating data decompression. In one embodiment the method comprises detecting floating point operands that are to be multiplied by an integer
