Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction
Reexamination Certificate
2000-02-14
2002-10-01
Decady, Albert (Department: 2133)
active
06460160
FIELD OF THE INVENTION
The present invention relates generally to communication systems and more particularly to error control in decoding.
BACKGROUND OF THE INVENTION
Error control in communication systems provides a number of benefits. As part of error control, it is desirable to provide soft information to a decoder for use in making decisions. This soft information often takes the form of a vector a of non-negative soft-values indicating the reliability, or likelihood of correctness, of the received bits in a hard-detected vector r. Typically, a soft-decision decoder performs an estimate of the transmitted codeword on the basis of both r and a, as will be understood by those skilled in the art.
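As a minimal illustration, the pair (r, a) can be obtained from noisy channel observations by taking signs and magnitudes. The sketch below assumes a BPSK mapping (bit 0 to +1, bit 1 to -1) and hypothetical sample values; it is not taken from the patent itself.

```python
# Sketch (assumed setup): deriving the hard-detected vector r and the
# reliability vector a from hypothetical noisy BPSK samples y, where
# bit 0 maps to +1 and bit 1 maps to -1.  The sample magnitude serves
# as the non-negative soft value for that bit position.
y = [0.9, -0.2, 1.4, -1.1, 0.05]     # hypothetical channel outputs
r = [0 if s >= 0 else 1 for s in y]  # hard decisions
a = [abs(s) for s in y]              # reliabilities (larger = more reliable)
print(r)  # [0, 1, 0, 1, 0]
print(a)  # [0.9, 0.2, 1.4, 1.1, 0.05]
```

Positions 1 and 4 have the smallest magnitudes and would be the least reliable bits for a soft-decision decoder.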
Soft-decision decoding of block codes can employ various techniques. For example, such techniques can employ trellis-based algorithms, syndrome-based algorithms, or Chase-based algorithms.
Trellis-based algorithms rely on an assumption that every linear block code can be represented by, and therefore decoded on, a trellis. An efficient decoding algorithm based on a trellis requires a “good” trellis representation, that is, one having a relatively small number of states, vertices, and/or edges. Disadvantageously, such a trellis representation may be unknown or difficult to discover for a particular code.
As another shortcoming, trellis decoding of block codes disadvantageously requires storage of the entire time-varying trellis structure, when, for most time-invariant convolutional codes, only a single section of the trellis structure is needed. Further, the amount of memory required for this extra storage can undesirably increase exponentially with code size.
As another example, a syndrome-based algorithm is employed in, for instance, coset-based soft-decision decoding, also referred to as “soft-syndrome decoding.” Soft syndrome decoding identifies the most-likely error pattern from the coset of the received hard-detected vector. As one shortcoming, decoders that employ syndrome-based algorithms generally suffer from relatively high complexity at maximal decoding demand.
In a further example, Chase-based-algorithm decoders employ an underlying hard-decision decoder to produce several decoding choices, the best of which is selected as the Chase decoder output. The number of hard-decision decoding attempts equals the number of test patterns employed, where each test pattern is used to create a vector that is hard-decision decoded. With a sufficient number of test patterns, a Chase-based soft-decision decoder can provide a relatively high degree of decoding accuracy. As one shortcoming of the Chase-based soft-decision decoder, however, all of the test patterns must be employed to obtain that relatively high degree of decoding accuracy.
Different forms of the Chase decoder usually employ different techniques to select a minimum number of test patterns for a given decoding performance. For example, a Chase-2^L decoder permutes the bits of the received vector in its L least reliable positions. These 2^L permuted vectors are hard-decision decoded, with each decoding attempt yielding either a codeword or an indication of decoding failure. The Chase-2^L decoder selects the codeword having the smallest metric, where the metric is computed by comparing the received vector to the codeword and summing the reliability values of the bit positions that differ.
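The enumeration of the 2^L test patterns can be sketched as follows, using the hypothetical reliability vector from above; the names `least` and `patterns` are illustrative, not from the patent.

```python
# Sketch: enumerating the 2**L test patterns of a Chase-2**L decoder.
# 'a' is a hypothetical reliability vector; the L indices with the
# smallest soft values are the candidate flip positions.
L = 2
a = [0.9, 0.2, 1.4, 1.1, 0.05]
least = sorted(range(len(a)), key=lambda i: a[i])[:L]  # L least reliable indices
patterns = [[(tp >> j) & 1 for j in range(L)] for tp in range(2 ** L)]
print(least)     # [4, 1] -- indices of the two smallest soft values
print(patterns)  # [[0, 0], [1, 0], [0, 1], [1, 1]]
```

Each pattern selects a subset of the L least reliable positions to flip before hard-decision decoding, giving 2^L decoding attempts in total.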
Turning to FIG. 1 as an illustrative representation of a Chase-2^L decoder of the prior art, Chase-2^L decoder 100 can employ exemplary logic 102. Logic 102 employs conventional binary counting, for example, to identify iterations (e.g., loops) 108. The test pattern TP used at STEP 120 to permute the received vector r of input 111 is an L-bit number where each bit position of the TP is in a one-to-one correspondence with the L indices of least reliable soft values in vector a of input 111. A 1 in the TP corresponds to flipping the bit position, while a 0 leaves the bit position unchanged. The initial TP is all 0s, meaning that the received vector r at STEP 120 initially is directly hard-decision decoded. The TP on the 2^L-th iteration is all 1s, meaning that the L least reliable indices are all flipped before performance of that iteration 108 of hard-decision decoding through STEP 120.
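The effect of applying a TP at STEP 120 can be sketched as below; `apply_tp` and the sample vectors are illustrative names and values, not from the patent.

```python
# Sketch: applying an L-bit test pattern TP to the received vector r.
# A 1 in the TP flips the corresponding least-reliable position;
# a 0 leaves that position unchanged.
def apply_tp(r, tp_bits, least):
    v = list(r)
    for bit, idx in zip(tp_bits, least):
        if bit:
            v[idx] ^= 1  # flip this least-reliable bit
    return v

r = [0, 1, 0, 1, 0]
least = [4, 1]                      # hypothetical L least reliable indices
print(apply_tp(r, [0, 0], least))   # all-zero TP: r unchanged
print(apply_tp(r, [1, 1], least))   # all-one TP: both positions flipped
```

The all-zero TP reproduces the plain hard-decision decoding of r; the all-one TP corresponds to the final, 2^L-th iteration.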
For each successful hard-decision decoding (as determined at STEP 124), still referring to FIG. 1, STEP 160 computes a temporary metric by summing the soft values corresponding to the bits changed between the received vector and the decoded codeword, either from 1s in the TP or the error locations {x}. The notation sum(a_TP+{x}) at STEP 160 signifies that the indices corresponding to both a 1 in the TP and an error in {x} are not included in the sum, since they do not result in an overall change between the received vector r and the decoded codeword.
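One way to realize the metric of STEP 160 is to compare r directly against the decoded codeword, which automatically excludes any index flipped both by the TP and by {x}. The sketch below uses hypothetical vectors; it is an illustration, not the patented implementation.

```python
# Sketch of the metric at STEP 160: sum the soft values of the positions
# that differ between the received vector r and the decoded codeword c.
# An index flipped both by the TP and by the decoder's error locations
# {x} ends up unchanged overall, so it drops out of this sum automatically.
def metric(r, c, a):
    return sum(a[i] for i in range(len(r)) if r[i] != c[i])

r = [0, 1, 0, 1, 0]
c = [0, 0, 0, 1, 0]            # hypothetical decoded codeword
a = [0.9, 0.2, 1.4, 1.1, 0.05]
print(metric(r, c, a))         # only position 1 differs -> 0.2
```

A small metric indicates a codeword close to the received vector in the soft (reliability-weighted) sense, which is why the smallest-metric candidate is selected.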
Referring further to FIG. 1, STEP 162 stores the index of the smallest metric in i_b, and STEP 162 stores the errors found during that decoding iteration 108 in {errors}. At the end of 2^L iterations 108, STEP 164 declares a decoding failure if no successful decodings occurred (as indicated at STEP 166 by i_b remaining at its initial value of -1). In the event of decoding success (to proceed to STEP 150), STEP 168 reconstructs the best decoded codeword from the received vector r by flipping the bits corresponding to the TP and {errors}. Any bits flipped in STEP 168 by both TP and {errors} are effectively unchanged.
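The overall loop of FIG. 1 can be sketched as follows. The stand-in `hd_decode` is an assumption for illustration only: it always "succeeds" and returns its input unchanged, whereas a real hard-decision decoder would return either a corrected codeword or a failure indication (here modeled as `None`).

```python
# Sketch of the full Chase-2**L loop of FIG. 1 (a reading of the prior-art
# logic, not the patented method).  hd_decode is a trivial stand-in.
def hd_decode(v):
    return v  # stand-in: always "succeeds", returning v as the codeword

def chase_decode(r, a, L):
    least = sorted(range(len(a)), key=lambda i: a[i])[:L]
    best_metric, best_cw = None, None
    for tp in range(2 ** L):           # binary counting over test patterns
        v = list(r)
        for j in range(L):             # flip positions where the TP has a 1
            if (tp >> j) & 1:
                v[least[j]] ^= 1
        cw = hd_decode(v)
        if cw is None:                 # decoding failure for this TP
            continue
        m = sum(a[i] for i in range(len(r)) if r[i] != cw[i])
        if best_metric is None or m < best_metric:
            best_metric, best_cw = m, cw
    return best_cw                     # None signals overall decoding failure

r = [0, 1, 0, 1, 0]
a = [0.9, 0.2, 1.4, 1.1, 0.05]
print(chase_decode(r, a, 2))           # stand-in always returns its input,
                                       # so the best candidate is r itself
```

Tracking the best candidate directly stands in for the i_b/{errors} bookkeeping of STEPs 162 and 168: rather than re-deriving the winner from the TP and {errors}, the sketch simply keeps the winning codeword.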
As one shortcoming of logic 102, again referring to FIG. 1, the binary counting used to index consecutive instances of iterations 108 often undesirably requires multiple bits to change between iterations. As another shortcoming, logic 102 at maximal decoding demand must perform all 2^L iterations.
In another approach, test patterns are eliminated based on the distance properties of the code employed. In this example, error pattern EP comprises the difference between a previously decoded codeword and the received vector; FTP comprises a future test pattern (e.g., with 1s in some of the L least reliable positions and 0s elsewhere, in contrast to the L-bit implementation format of logic 102, FIG. 1); W(EP) comprises the Hamming weight (number of 1s) of EP except in the L least reliable positions; and W(EP,FTP) comprises the Hamming distance (number of differing positions) between EP and FTP in the L least reliable positions. The distance between the codeword corresponding to EP and any codeword that FTP generates is at most W(EP,FTP)+W(EP)+t, where t is the maximum error-correction capability of the hard-decision decoder. Therefore, if the following Test (1) is satisfied, then the FTP cannot generate a new codeword.
W(EP) + W(EP,FTP) < d_min − t   (1)
In Test (1), d_min comprises the minimum distance, a property of the code, as will be appreciated by those skilled in the art. Eliminating an FTP that satisfies Test (1) will not incur any performance loss. Test (1) can serve to significantly reduce the number of hard-decision decodings.
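Evaluating Test (1) for a candidate FTP can be sketched as below. The vectors, d_min = 7, and t = 3 are hypothetical values chosen for illustration (e.g., a code of minimum distance 7 whose hard-decision decoder corrects up to 3 errors); the helper names are not from the patent.

```python
# Sketch of Test (1): if W(EP) + W(EP,FTP) < d_min - t, the future test
# pattern FTP cannot generate a new codeword and may be skipped.
def w_outside(ep, least):
    # W(EP): weight of EP outside the L least reliable positions
    return sum(ep[i] for i in range(len(ep)) if i not in least)

def w_inside(ep, ftp, least):
    # W(EP,FTP): mismatches between EP and FTP inside those positions
    return sum(1 for j, i in enumerate(least) if ep[i] != ftp[j])

def can_skip(ep, ftp, least, d_min, t):
    return w_outside(ep, least) + w_inside(ep, ftp, least) < d_min - t

least = [4, 1]              # hypothetical L least reliable indices
ep = [0, 1, 0, 0, 0]        # hypothetical error pattern
ftp = [1, 1]                # future test pattern over 'least'
print(can_skip(ep, ftp, least, d_min=7, t=3))  # True: this FTP is skippable
```

Here W(EP) = 0 and W(EP,FTP) = 1, so the left side is 1 < 7 − 3 = 4 and the FTP is eliminated without any hard-decision decoding.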
As one shortcoming, Test (1) entails a non-negligible cost of implementation that is dominated by the determination whether the error pattern EP occurred in the L least reliable positions. As another shortcoming, the worst-case/maximum cost when no test patterns can be eliminated from the decoding through Test (1), is larger than before. For instance, this disadvantageously increased cost can comprise a fifty percent increase in processing load, depending on the particular hard-decision decoder.
It remains desirable to provide enhancements for error correction in decoding.
Inventors: Chase, Shelly A.; Decady, Albert; Haas, Kenneth A.
Assignee: Motorola Inc.