Floating point addition pipeline including extreme value,...

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed


Details

Classification: C708S495000
Type: Reexamination Certificate
Status: active
Patent number: 06397239

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to floating point arithmetic within microprocessors, and more particularly to an add/subtract pipeline within a floating point arithmetic unit.
2. Description of the Related Art
Numbers may be represented within computer systems in a variety of ways. In an integer format, for example, a 32-bit register may store numbers ranging from 0 to 2^32 − 1. (The same register may also store signed numbers by giving up one order of magnitude in range.) This format is limiting, however, since it is incapable of representing numbers which are not integers (the binary point in integer format may be thought of as being to the right of the least significant bit in the register).
To accommodate non-integer numbers, a fixed point representation may be used. In this form of representation, the binary point is considered to be somewhere other than to the right of the least significant bit. For example, a 32-bit register may be used to store values from 0 (inclusive) to 2 (exclusive) by processing register values as though the binary point is located to the right of the most significant register bit. Such a representation allows (in this example) 31 register bits to represent fractional values. In another embodiment, one bit may be used as a sign bit so that a register can store values between −2 and +2.
Because the binary point is fixed within a register or storage location during fixed point arithmetic operations, numbers with differing orders of magnitude may not be represented with equal precision without scaling. For example, it is not possible to represent both 1101b (13 in decimal) and 0.1101b (0.8125 in decimal) using the same fixed point representation. While fixed point representation schemes are still quite useful, many applications require a large dynamic range (the ratio of the largest number representation to the smallest, non-zero, number representation in a given format).
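As an illustration of the fixed point interpretation described above, here is a minimal C sketch (not part of the patent; the Q1.31 layout and the helper name q1_31_to_double are illustrative choices) that treats a 32-bit word as an unsigned value with the binary point to the right of the most significant bit, so stored values fall in [0, 2):
```c
#include <stdint.h>
#include <stdio.h>

/* Interpret a 32-bit word as an unsigned Q1.31 fixed-point value: the binary
 * point sits to the right of the most significant bit, so the stored integer
 * is scaled by 2^-31 and the representable range is [0, 2). */
double q1_31_to_double(uint32_t raw)
{
    return (double)raw / (double)(1u << 31);
}

int main(void)
{
    /* 0b1101 placed at the top of the word encodes 1.101b = 1.625 */
    uint32_t raw = 0xDu << 28;
    printf("raw=0x%08X value=%f\n", (unsigned)raw, q1_31_to_double(raw));
    return 0;
}
```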
In order to solve this problem of dynamic range, floating point representation and arithmetic is widely used. Generally speaking, floating point numeric representations include three parts: a sign bit, an unsigned fractional number, and an exponent value. The most widespread floating point format in use today, IEEE standard 754 (single precision), is depicted in FIG. 1.
Turning now to FIG. 1, floating point format 2 is shown. Format 2 includes a sign bit 4 (denoted as S), an exponent portion 6 (E), and a mantissa portion 8 (F). Floating point values represented in this format have a value V, where V is given by:
V = (−1)^S · 2^(E−bias) · (1.F).   (1)
Sign bit S represents the sign of the entire number, while mantissa portion F is a 23-bit number with an implied leading 1 bit (values with a leading one bit are said to be "normalized"). In other embodiments, the leading one bit may be explicit. Exponent portion E is an 8-bit value which represents the true exponent of the number V offset by a predetermined bias. A bias is used so that both positive and negative true exponents of floating point numbers may be easily compared. The number 127 is used as the bias in IEEE standard 754. Format 2 may thus accommodate numbers having exponents from −127 to +128. Floating point format 2 advantageously allows 24 bits of representation within each of these orders of magnitude.
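The field widths and bias of equation (1) can be made concrete by unpacking a value's bit pattern. The following minimal C sketch (not from the patent) decodes S, E and F from a single-precision value and reconstructs V for a normalized input:
```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <math.h>

/* Decode the three fields of an IEEE 754 single-precision value and rebuild
 * V = (-1)^S * 2^(E-bias) * (1.F) per equation (1). Only normalized inputs
 * are handled; E = 0 and E = 255 encode denormals, infinities and NaNs. */
int main(void)
{
    float x = -13.8125f;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);       /* reinterpret the float's bit pattern */

    uint32_t S = bits >> 31;              /* sign bit        */
    uint32_t E = (bits >> 23) & 0xFFu;    /* biased exponent */
    uint32_t F = bits & 0x7FFFFFu;        /* 23-bit mantissa */

    double v = (S ? -1.0 : 1.0)
             * ldexp(1.0 + (double)F / 8388608.0, (int)E - 127);  /* 8388608 = 2^23 */

    printf("S=%u E=%u F=0x%06X reconstructed V=%f\n",
           (unsigned)S, (unsigned)E, (unsigned)F, v);
    return 0;
}
```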
Floating point addition is an extremely common operation in numerically-intensive applications. (Floating point subtraction is accomplished by inverting one of the inputs and performing addition.) Although floating point addition is related to fixed point addition, two differences cause complications. First, an exponent value of the result must be determined from the input operands. Second, rounding must be performed. The IEEE standard specifies that the result of an operation should be the same as if the result were computed exactly and then rounded (to a predetermined number of digits) using the current rounding mode. IEEE standard 754 specifies four rounding modes: round to nearest, round to zero, round to +∞, and round to −∞. The default mode, round to nearest, chooses the even number in the event of a tie.
Turning now to FIG. 2, a prior art floating point addition pipeline 10 is depicted. Not all steps in pipeline 10 are performed for all possible additions; that is, some steps are optional for various cases of inputs. The stages of pipeline 10 are described below with reference to input values A and B. Input value A has a sign bit A_S, an exponent value A_E, and a mantissa value A_F. Input value B, similarly, has a sign bit B_S, an exponent value B_E, and a mantissa value B_F.
Pipeline 10 first includes a stage 12, in which an exponent difference E_diff is calculated between A_E and B_E. In one embodiment, if E_diff is calculated to be negative, operands A and B are swapped such that A is now the larger operand. In the embodiment shown in FIG. 2, the operands are swapped such that E_diff is always positive.
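A minimal C sketch of this exponent-difference-and-swap step follows; the fp_operand struct is a hypothetical unpacked-operand representation chosen here for illustration, not a structure defined by the patent:
```c
#include <stdint.h>

/* Hypothetical unpacked single-precision operand. */
typedef struct {
    uint32_t sign;   /* S: sign bit                                  */
    int32_t  exp;    /* E: biased exponent                           */
    uint32_t mant;   /* F: 23-bit mantissa (implied 1 not yet added) */
} fp_operand;

/* Stage 12, as described above: compute E_diff = A_E - B_E and swap the
 * operands when it is negative, so that A ends up with the larger exponent
 * and the returned E_diff is always non-negative. */
int32_t exponent_difference(fp_operand *a, fp_operand *b)
{
    int32_t e_diff = a->exp - b->exp;
    if (e_diff < 0) {
        fp_operand tmp = *a;
        *a = *b;
        *b = tmp;
        e_diff = -e_diff;
    }
    return e_diff;
}
```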
In stage 14, operands A and B are aligned. This is accomplished by shifting operand B E_diff bits to the right. In this manner, the mantissa portions of both operands are scaled to the same order of magnitude. If A_E = B_E, no shifting is performed; consequently, no rounding is needed. If E_diff > 0, however, information must be maintained with respect to the bits which are shifted rightward (and are thus no longer representable within the predetermined number of bits). In order to perform IEEE rounding, information is maintained relative to 3 bits: the guard bit (G), the round bit (R), and the sticky bit (S). The guard bit is one bit less significant than the least significant bit (L) of the shifted value, while the round bit is one bit less significant than the guard bit. The sticky bit is the logical-OR of all bits less significant than R. For certain cases of addition, only the G and S bits are needed.
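The alignment shift and the G, R, S bookkeeping can be sketched as follows in C, assuming the 24-bit mantissa (implied leading 1 made explicit) occupies the low bits of a 32-bit word; the function name and struct are illustrative choices, not the patent's:
```c
#include <stdint.h>
#include <stdbool.h>

/* Result of the stage 14 alignment shift. */
typedef struct {
    uint32_t mant;   /* mantissa after shifting right by e_diff */
    bool g, r, s;    /* guard, round and sticky bits            */
} aligned_mant;

/* e_diff is assumed non-negative (stage 12 guarantees this). */
aligned_mant align_mantissa(uint32_t mant, int32_t e_diff)
{
    aligned_mant out = { mant, false, false, false };
    if (e_diff == 0)
        return out;                      /* exponents equal: no shift, no rounding needed */

    if (e_diff > 26)
        e_diff = 26;                     /* larger shifts behave identically for a 24-bit value */

    out.g = (mant >> (e_diff - 1)) & 1;                       /* first bit shifted out  */
    out.r = (e_diff >= 2) ? (mant >> (e_diff - 2)) & 1 : 0;   /* second bit shifted out */
    out.s = (e_diff >= 3)                                     /* OR of everything below R */
          ? (mant & ((1u << (e_diff - 2)) - 1)) != 0
          : false;
    out.mant = mant >> e_diff;
    return out;
}
```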
In stage 16, the shifted version of operand B is inverted, if needed, to perform subtraction. In some embodiments, the signs of the input operands and the desired operation (either add or subtract) are examined in order to determine whether effective addition or effective subtraction is occurring. In one embodiment, effective addition is given by the equation:
EA = ¬(A_S ⊕ B_S ⊕ op),   (2)
where op is 0 for addition and 1 for subtraction. For example, the operation A minus B, where B is negative, is equivalent to A plus B (ignoring the sign bit of B); the exclusive-OR term evaluates to 0, and effective addition is performed. The inversion in stage 16 may be either of the one's complement or two's complement variety.
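A minimal C sketch of the effective-addition test and the conditional inversion, using two's complement for the inversion (the function names are illustrative only):
```c
#include <stdint.h>
#include <stdbool.h>

/* Equation (2): effective addition occurs when A_S xor B_S xor op is 0,
 * where op is 0 for addition and 1 for subtraction. */
bool effective_addition(uint32_t a_sign, uint32_t b_sign, uint32_t op)
{
    return ((a_sign ^ b_sign ^ op) & 1u) == 0;
}

/* Stage 16: invert the aligned mantissa of B only for effective subtraction.
 * Two's complement is used here; one's complement plus a carry-in works too. */
uint32_t maybe_invert(uint32_t b_mant, bool eff_add)
{
    return eff_add ? b_mant : ~b_mant + 1u;
}
```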
In stage 18, the addition of operand A and operand B is performed. As described above, operand B may be shifted and may be inverted as needed. Next, in stage 20, the result of stage 18 may be recomplemented, meaning that the value is returned to sign-magnitude form (as opposed to one's or two's complement form).
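Under the assumption that the mantissas fit comfortably in a 32-bit word, stages 18 and 20 reduce to a full add followed by a conditional negation back to sign-magnitude form, as in this C sketch (the sm_result type is hypothetical):
```c
#include <stdint.h>
#include <stdbool.h>

/* Sign-magnitude result of the add/recomplement steps. */
typedef struct {
    uint32_t magnitude;
    bool     negative;
} sm_result;

sm_result add_and_recomplement(uint32_t a_mant, uint32_t b_mant_maybe_inverted)
{
    uint32_t sum = a_mant + b_mant_maybe_inverted;   /* stage 18: full add (mod 2^32) */

    sm_result r;
    r.negative  = (sum >> 31) != 0;                  /* stage 20: sum is negative in two's complement */
    r.magnitude = r.negative ? 0u - sum : sum;       /* recomplement to sign-magnitude */
    return r;
}
```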
Subsequently, in stage 22, the result of stage 20 is normalized. This includes left-shifting the result of stage 20 until the most significant bit is a 1. The bits which are shifted in are calculated according to the values of G, R, and S. In stage 24, the normalized value is rounded according to the selected rounding mode. If S includes the R bit OR'ed in, round to nearest (even) is given by the equation:
RTN = G·(L + S),   (3)
where · denotes logical AND and + denotes logical OR. If the rounding performed in stage 24 produces an overflow, the result is post-normalized (right-shifted) in stage 26.
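A much-simplified C sketch of stages 22, 24 and 26 for a magnitude that fits in 24 bits: exponent updates, full R-bit handling, and the right shift needed when effective addition itself overflows are all omitted, and S is assumed to already include R as described above. It is meant only to show the shape of the dataflow, not a complete rounding implementation:
```c
#include <stdint.h>
#include <stdbool.h>

uint32_t normalize_and_round(uint32_t mag, bool g, bool s)
{
    if (mag == 0)
        return 0;                         /* exact zero: nothing to normalize or round */

    while ((mag & (1u << 23)) == 0) {     /* stage 22: left-shift until bit 23 is set, */
        mag = (mag << 1) | (g ? 1u : 0u); /* pulling in G on the first shift, then zeros */
        g = false;
    }

    bool l   = mag & 1u;                  /* least significant bit of the normalized value */
    bool rtn = g && (l || s);             /* stage 24: equation (3) with whatever G survived */
    if (rtn)
        mag += 1u;

    if (mag & (1u << 24))                 /* stage 26: rounding overflowed into bit 24, */
        mag >>= 1;                        /* post-normalize with a right shift */

    return mag;
}
```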
As can be seen from the description of pipeline 10, floating point addition is quite complicated. It is also quite time-consuming if performed as shown in FIG. 2: stage 14 (alignment) requires a shift, stage 18 requires a full add, stage 20 (recomplementation) requires a full add, stage 22 (normalization) requires a shift, and stage 24 (rounding) requires a full add. Consequently, performing floating point addition using pipeline 10 is relatively slow.
