Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction
Reexamination Certificate
2000-02-07
2002-11-05
Decady, Albert (Department: 2133)
active
06477679
FIELD OF THE INVENTION
This invention relates generally to communications systems. More specifically, the present invention relates to methods for decoding data in digital communications systems.
RELATED ART
FIG. 1 depicts a block diagram of a parallel concatenated recursive systematic convolutional coder 100 known in the art. Coder 100 falls into the class of coders known as “turbo coders.” Coder 100 includes a first constituent encoder 102, a second constituent encoder 104, a multiplexer 106, and a buffer interleaver 110. Data to be encoded by coder 100 is input to multiplexer 106 and to buffer interleaver 110. An output of multiplexer 106 is input to a recursive convolutional encoder 108. Similarly, an output of buffer interleaver 110 is input to a recursive convolutional encoder 108′. An INTERMEDIATE OUTPUT of recursive convolutional encoder 108 is also input to multiplexer 106. A control signal, “TERMINATION CONTROL,” selects which input of multiplexer 106 is output to convolutional encoder 108 and to a systematic first-in-first-out queue (“FIFO”) 112. An output of systematic FIFO 112 generates a first one of four outputs of coder 100, SYSTEMATIC-1. An ENCODER OUTPUT is input to a parity first-in-first-out queue (“FIFO”) 114. An output of parity FIFO 114 generates a second one of four outputs of coder 100, PARITY-1.
Continuing with recursive convolutional encoder 108, the output of multiplexer 106 is input to a first input of an exclusive OR (XOR) gate 116. A second input of XOR gate 116 receives the INTERMEDIATE OUTPUT of recursive convolutional encoder 108. An output of XOR gate 116 is input to a first bit latch 118. An output of bit latch 118 is input to a second bit latch 120. An output of bit latch 120 is input to an XOR gate 122 and to an XOR gate 124. A second input of XOR gate 122 receives the output of XOR gate 116. A second input of XOR gate 124 receives the output of bit latch 118. An output of XOR gate 124 generates the INTERMEDIATE OUTPUT of recursive convolutional encoder 108.
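The gate-and-latch structure just described amounts to a two-bit recursive convolutional encoder. The following is a minimal sketch of one encoding pass, with the roles of latches 118/120 and gates 116/122/124 taken from the connections stated above; the resulting generator polynomial is an inference from those connections, not something the text states:

```python
def rsc_encode(bits):
    """Bit-level sketch of recursive convolutional encoder 108 of FIG. 1.

    Latches 118 and 120 hold the two state bits. XOR gate 116 combines the
    input with the fed-back INTERMEDIATE OUTPUT (gate 124), and gate 122
    produces the parity (ENCODER OUTPUT) bit.
    """
    d1 = d2 = 0                # bit latches 118 and 120, reset to a known state
    parity = []
    for d in bits:
        feedback = d1 ^ d2     # XOR gate 124: INTERMEDIATE OUTPUT
        s = d ^ feedback       # XOR gate 116
        parity.append(s ^ d2)  # XOR gate 122: parity output bit
        d2, d1 = d1, s         # shift the latch chain 118 -> 120
    return parity
```

The systematic stream is simply the input itself, so only the parity bits are computed here.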
Continuing with recursive convolutional encoder 108′, the output of buffer interleaver 110 generates a third one of four outputs of coder 100, SYSTEMATIC-2. In one embodiment of coder 100, recursive convolutional encoder 108′ is identical to recursive convolutional encoder 108. In this case, the final output of coder 100, PARITY-2, is generated by an XOR gate 122′ (not shown). As described below, recursive convolutional encoder 108′ may differ from recursive convolutional encoder 108.
Continuing with the operation of coder 100, a data stream to be encoded is input to both recursive convolutional encoders. These encoders create redundant information (PARITY-1, PARITY-2) that is transmitted in addition to the original data stream (SYSTEMATIC-1, SYSTEMATIC-2). Buffer interleaver 110 reorders the input data stream prior to the operation of encoder 108′.
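The four output streams of coder 100 can be sketched as follows, assuming (per the embodiment above) that encoder 108′ is identical to encoder 108. Here `perm` stands in for buffer interleaver 110 and is any permutation of the input indices; the helper reuses the gate structure inferred earlier:

```python
def rsc_parity(bits):
    """Parity bits of the assumed constituent encoder (108 and 108')."""
    d1 = d2 = 0
    out = []
    for d in bits:
        fb = d1 ^ d2           # INTERMEDIATE OUTPUT feedback
        s = d ^ fb
        out.append(s ^ d2)
        d2, d1 = d1, s
    return out

def turbo_encode(data, perm):
    """Sketch of coder 100's four output streams.

    `perm` models buffer interleaver 110: a permutation of range(len(data)).
    """
    systematic1 = list(data)               # via systematic FIFO 112
    parity1 = rsc_parity(data)             # encoder 108, via parity FIFO 114
    systematic2 = [data[i] for i in perm]  # output of buffer interleaver 110
    parity2 = rsc_parity(systematic2)      # encoder 108'
    return systematic1, parity1, systematic2, parity2
```

The interleaver permutation is a design parameter of the coder; any fixed permutation illustrates the data flow.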
Later, a decoder receiving the data stream uses the systematic and parity data to recreate the most likely original data stream based on the polynomials instantiated in the recursive convolutional encoders. Known methods of decoding turbo coded data streams have at least two disadvantages.
First, the calculation of a possible data stream from its parity and systematic data streams depends upon the starting state of the recursive convolutional encoders used by the coder. In a turbo coding scheme, these calculations are made in a forward direction (first data bit to last data bit) and in a backward direction (last data bit to first data bit) to improve the accuracy of the output. Both of these calculations require knowledge of the initial (or final) state of the recursive convolutional encoders. Otherwise, the accuracy of the data stream will be limited during the initial (or final) bit calculations. In the prior art, it is known to initially set the recursive convolutional encoders to a known state. This scheme addresses the forward calculation problem. However, the determination of the final state is more difficult. Multiplexer 106 may be used to input predetermined data into recursive convolutional encoder 108 after processing the data stream. This data forces the encoder to a final known state. Unfortunately, buffer interleaver 110 rearranges these bits in the data stream prior to input to encoder 108′. Consequently, encoder 108′ ignores these final inputs. Encoder 108′ may include a multiplexer 106′ (not shown) analogous to multiplexer 106 to force the final state of recursive convolutional encoder 108′ to a known state. This latter approach requires that additional systematic and parity data be sent to the decoder from encoder 108′ and that this data be processed by the decoder.
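One way the "predetermined data" can force a known final state, consistent with the INTERMEDIATE OUTPUT being routed back into multiplexer 106 (though not spelled out above): selecting the feedback bit as the input makes XOR gate 116 output zero, so the latch chain drains to the all-zero state in two steps. A sketch under that assumption:

```python
def flush_to_zero(d1, d2, steps=2):
    """Termination sketch: TERMINATION CONTROL selects the fed-back
    INTERMEDIATE OUTPUT as the encoder input, draining latches 118/120.

    Returns the final state plus the tail systematic and parity bits,
    which would pass through FIFOs 112 and 114 like ordinary data.
    """
    tail_sys, tail_par = [], []
    for _ in range(steps):
        fb = d1 ^ d2             # INTERMEDIATE OUTPUT (gate 124)
        d = fb                   # multiplexer 106 selects the feedback path
        s = d ^ fb               # gate 116: always 0 during termination
        tail_sys.append(d)
        tail_par.append(s ^ d2)  # gate 122
        d2, d1 = d1, s
    return (d1, d2), tail_sys, tail_par
```

Two steps suffice because the assumed encoder has two state bits; the tail bits are the "additional systematic and parity data" the decoder must also process.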
Second, a turbo coding scheme decodes a block of data by iteratively processing the data through two sequential maximum-a-posteriori (“MAP”) decoders. The data is interleaved between the two sequential MAP decoders to reflect the effect of buffer interleaver 110. During each iteration, one MAP decoder uses a soft decision (intermediate guess) of the other MAP decoder in its current calculation. Specifically, the branch metric (γ) is the conditional probability that a particular transition occurred from one state to another state, assuming the starting state was the correct state, and given the received systematic and parity bits. According to known techniques, this calculation requires three operands: (1) the soft decision based on the previous iteration, (2) the probability that the received systematic bit is a one or a zero, and (3) the probability that the received parity bit is a one or a zero. The requirement of three operands in this repetitive calculation significantly increases the processing requirement of the MAP decoder, whether instantiated in hardware or as a software routine.
REFERENCES:
patent: 6044116 (2000-03-01), Wang
patent: 1022860 (2000-07-01), None
Barbulescu et al., “Turbo Codes: A Tutorial on a New Class of Powerful Error Correcting Coding Schemes,” Institute for Telecommunications Research, pp. 1-21 (Oct. 1998).*
Gracie et al., “Performance of a Low-Complexity Turbo Decoder and its Implementation on a Low-Cost, 16-Bit Fixed-Point DSP,” Communications Research Centre, pp. 1-11 (Jul. 1998).*
Sklar, “A Primer on Turbo Code Concepts,” IEEE, pp. 94-102 (1997).
“European Transactions on Telecommunications,” AEI, ETT vol. 8, No. 2, pp. 119-125 (1997).
Benedetto et al., “Design of Parallel Concatenated Convolutional Codes,” IEEE, pp. 591-600 (1996).
Berrou et al., “Near Optimum Error Correcting Coding and Decoding: Turbo-Codes,” IEEE, pp. 1261-1271 (1996).
Berrou et al., “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes (1),” IEEE, pp. 1064-1070 (1993).
Bahl et al., “Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate,” IEEE, pp. 284-287 (1974).
Viterbi, “Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm,” IEEE, pp. 260-269 (1966).
Such Paula F.
Taipale Dana J.
Chase Shelly A