Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction
Reexamination Certificate
2002-06-05
2004-04-06
Dildine, R. Stephen (Department: 2133)
active
06718504
ABSTRACT:
COPYRIGHT
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to data processing, and more particularly to the processing of algorithms in software that benefit from the efficient implementation of forward and backward butterfly operations used, for example, in Maximum a posteriori (MAP) decoding. Such exemplary MAP decoding is used in the processing of parallel concatenated codes (Turbo codes) and serial concatenated codes.
2. Description of Related Technology
Parallel and serial concatenated codes are formed from a data sequence that is concatenated with a sequence of output bits from two or more constituent encoders, e.g., convolutional encoders. Turbo codes correspond to a specific type of parallel concatenated code. However, within this application, it is to be understood that where applicable, discussions referring to “Turbo codes” can be extended more generally to both parallel and serial concatenated codes. Embodiments involving parallel concatenated codes, and more specifically Turbo codes, are developed herein by way of example only.
The use of Turbo codes for transmission of data over a noisy channel was first introduced in C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo codes,” Proc. of 1993 Int. Conf. Comm., pp. 1064-1070. This reference is referred to as the “Berrou reference” hereinafter. Turbo codes provide bit error rates near Shannon's theoretical limit but add significant complexity to the receiver's decoder. Turbo codes are used for forward error correction in several important communication standards such as, inter alia, the Third Generation Partnership Project (hereafter, 3GPP) cellular communications standards. Consequently, much effort has been applied to developing efficient Turbo decoder implementations.
MAP (maximum a posteriori) based decoders are widely used within Turbo decoder implementations and require significant data processing. A MAP decoder determines a sequence that minimizes the symbol error rate, as opposed to finding a maximum-likelihood sequence as determined using the more common Viterbi algorithm. The MAP decoder algorithm is described in Bahl, L. R. et al., “Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate,” IEEE Transactions on Information Theory, March 1974, pp. 284-287, hereinafter called the “Bahl reference.” The MAP decoder described in the Bahl reference is often called the “BCJR algorithm” in recognition of its authors. While the MAP decoder is more costly than the Viterbi algorithm, it provides an information sequence, known as an extrinsic sequence, that is needed by Turbo decoders. Two MAP decoders configured in a feedback arrangement are employed within a Turbo decoder. The processing associated with the MAP decoders accounts for the bulk of the computational load in Turbo decoding.
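For orientation (this is the standard formulation from the turbo decoding literature, not text reproduced from the patent), the quantity a SISO MAP decoder produces for each bit index k is the a posteriori log-likelihood ratio, and, under the usual BPSK/AWGN modeling assumptions, the extrinsic information exchanged between the two constituent decoders is what remains after the a priori and systematic contributions are subtracted from it:

\[
\Lambda(u_k) \;=\; \ln\frac{P(u_k = 1 \mid \mathbf{y})}{P(u_k = 0 \mid \mathbf{y})},
\qquad
\lambda_{\text{out}}(k) \;=\; \Lambda(u_k) \;-\; \lambda_{\text{in}}(k) \;-\; L_c\,x(k),
\]

where \(\mathbf{y}\) denotes the full received block and \(L_c\) is the channel reliability factor.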
Most practical implementations perform computations using logarithmic representations of probability information and are known as Log-MAP decoders. A decoder known as the Max-Log-MAP decoder uses a mathematical approximation to simplify the calculations involved and thereby reduce the overall complexity of the decoder. Max-Log-MAP decoders are discussed in “Efficient Software Implementation of the Max-Log-MAP Turbo decoder on the StarCore SC140 DSP,” A. Chass, A. Gubeskys, and G. Kutz, ICSPAT 2000 and Motorola Application Note, hereinafter referred to as the “Chass reference.” The Max-Log-MAP decoder's performance is slightly reduced compared to that of the Log-MAP decoder, but it is more commonly implemented due to its decreased computational complexity. The Max-Log-MAP decoder's performance can be improved by the addition of a correction term. A Max-Log-MAP decoder that makes use of this correction is known as a Max*-Log-MAP decoder. Max*-Log-MAP decoders are discussed in Michel, H. and Wehn, N., “Turbo-Decoder Quantization for UMTS,” IEEE Communications Letters, Vol. 5, No. 2, February 2001, hereinafter called the “Michel reference.” The exemplary embodiment of the invention performs efficient Max*-Log-MAP decoding in software using a customized processor designed to efficiently implement the operations involved in various MAP decoding algorithms. Most of the computational operations required to perform MAP-based decoding involve forward (alpha) metric updates, backward (beta) metric updates, and Log Likelihood Ratio (hereafter, LLR) calculations.
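The difference between the two decoders reduces to the treatment of a single two-input operator. The following C fragment is a minimal sketch (the function names, two-state trellis labeling, and branch metric sign convention are illustrative assumptions, not the patent's implementation) showing the Max-Log approximation, the corrected max* operator, and one forward (alpha) butterfly built from it:

#include <math.h>

/* Max-Log approximation: max*(a, b) is replaced by max(a, b). */
static inline float max_log(float a, float b)
{
    return (a > b) ? a : b;
}

/* Max*-Log (Jacobian logarithm): max(a, b) plus the correction term
 * ln(1 + e^-|a-b|), which restores Log-MAP accuracy. */
static inline float max_star(float a, float b)
{
    return ((a > b) ? a : b) + log1pf(expf(-fabsf(a - b)));
}

/* One forward (alpha) metric butterfly: two parent metrics feed two
 * child metrics through branch metrics +gamma / -gamma, an
 * illustrative two-state labeling rather than any particular trellis. */
static void alpha_butterfly(const float alpha_in[2], float gamma,
                            float alpha_out[2])
{
    alpha_out[0] = max_star(alpha_in[0] + gamma, alpha_in[1] - gamma);
    alpha_out[1] = max_star(alpha_in[0] - gamma, alpha_in[1] + gamma);
}

A backward (beta) butterfly has the same structure with the recursion running from the end of the block toward its start, and dropping the log1pf() correction term turns the Max*-Log-MAP update back into the plain Max-Log-MAP update.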
FIG. 1 illustrates a prior art block diagram of a rate ⅓ Turbo encoder 100 as used in a transmitting device. An input data sequence u(k) 101 (typically binary valued) is directly coupled to an output coupling 103 to produce a systematic data subsequence x(k) (i.e., x(k)=u(k)). The input sequence u(k) is also coupled to a first convolutional encoder 105 to produce a first parity information subsequence y1(k) 107. The input sequence u(k) is also coupled to a pseudo-random interleaver 109 whose output is coupled to a second convolutional encoder 111 to produce a second parity information subsequence y2(k) 113. The output of the rate ⅓ Turbo encoder 100 is a sequence containing the three subsequences x(k), y1(k), and y2(k).
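As a concrete illustration of the FIG. 1 structure (a sketch only, not code from the patent), the following C fragment uses the 8-state constituent polynomials g0(D)=1+D²+D³ (feedback) and g1(D)=1+D+D³ (feed-forward) as an example choice, takes the interleaver permutation pi[] as given, and omits trellis termination:

#include <stddef.h>

/* Illustrative 8-state recursive systematic convolutional (RSC)
 * constituent encoder with example polynomials g0(D) = 1 + D^2 + D^3
 * (feedback) and g1(D) = 1 + D + D^3 (feed-forward).  Returns one
 * parity bit and updates the three-bit shift register s[] in place
 * (s[0] holds the most recent bit). */
static int rsc_parity(int u, int s[3])
{
    int fb = u ^ s[1] ^ s[2];              /* feedback term    */
    int p  = fb ^ s[0] ^ s[2];             /* parity bit       */
    s[2] = s[1]; s[1] = s[0]; s[0] = fb;   /* shift the state  */
    return p;
}

/* Rate-1/3 Turbo encoder structure of FIG. 1 (sketch): systematic
 * copy x(k), parity y1(k) from the first constituent encoder, and
 * parity y2(k) from the second constituent encoder fed through the
 * interleaver permutation pi[]. */
static void turbo_encode(const int *u, const size_t *pi, size_t n,
                         int *x, int *y1, int *y2)
{
    int s1[3] = {0, 0, 0};
    int s2[3] = {0, 0, 0};
    for (size_t k = 0; k < n; k++) {
        x[k]  = u[k];                      /* systematic bit x(k) */
        y1[k] = rsc_parity(u[k], s1);      /* first parity y1(k)  */
        y2[k] = rsc_parity(u[pi[k]], s2);  /* second parity y2(k) */
    }
}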
The Turbo encoder of FIG. 1 involves relatively simple logic processing and is usually implemented using Finite State Machine (FSM) controlled hardware. The encoded data stream is transmitted over a noisy channel and is received at a receiving device as an error-prone data stream comprising error-prone systematic and parity information subsequences. A Turbo decoder is used to operate on the received error-prone subsequences in order to produce an error-corrected estimate of the input data sequence u(k).
In many embodiments a rate ½ Turbo encoder is used instead of the aforementioned rate ⅓ Turbo encoder. The rate ½ Turbo encoder discards every other element of the subsequences y1(k) 107 and y2(k) 113, so that the encoder's output sequence contains one parity bit for each systematic bit. This process of decimating the parity sequences is known to those skilled in the art as “puncturing.”
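A minimal sketch of this puncturing step, assuming the simple alternating pattern of keeping y1(k) on even indices and y2(k) on odd indices (actual systems define their own puncturing matrices), might look as follows:

#include <stddef.h>

/* Rate-1/3 -> rate-1/2 puncturing sketch: keep y1(k) on even k and
 * y2(k) on odd k so that exactly one parity bit accompanies each
 * systematic bit.  The alternation pattern is an illustrative
 * assumption. */
static void puncture_rate_half(const int *y1, const int *y2, size_t n,
                               int *parity)
{
    for (size_t k = 0; k < n; k++)
        parity[k] = ((k & 1) == 0) ? y1[k] : y2[k];
}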
A Turbo decoder 200 designed according to the most commonly employed Turbo decoding scheme is shown in FIG. 2. At the Turbo decoder 200, the input data subsequences correspond to error-prone versions of the transmitted subsequences. This is because the Turbo decoder generally only has access to the transmitted information after it has been received through a noisy channel. The received error-prone data subsequence x(k) 202 and the received error-prone first parity subsequence y1(k) 204 are coupled into a first Soft Input Soft Output (SISO) MAP decoder 206. Also coupled into the first SISO MAP decoder 206 is a feedback sequence involving a priori log likelihood information, λin(k), output from a deinterleaver 208. The output from the first SISO MAP decoder 206, λout(k) 207, is coupled to an interleaver 210, which generates a set of a priori information that is coupled to a second SISO MAP decoder 212. The second SISO MAP decoder 212 also takes as input an error-prone second parity data subsequence y2(k) 214 and the error-prone systematic data x(k) 202 after passing through an interleaver 216. As is known in the art, the deinterleaver 208 and the interleavers 210 and 216 use the same interleaving function as used in the encoder 100. The output of the second SISO MAP decoder 212 is a second log likelihood data output sequence, λout(k) 213. The sequence λout(k) 213, like the other data sequences, includes a corresponding element for each bit index k into the input data block. The number k preferably rang
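To make the FIG. 2 data flow concrete, the following C sketch performs one decoding iteration; siso_map_decode() is a hypothetical placeholder for a SISO MAP decoder rather than an interface defined by the patent, and pi[] is the same interleaver permutation used by the encoder:

#include <stddef.h>

/* Hypothetical SISO MAP decoder interface (placeholder): consumes
 * received systematic and parity values plus a priori LLRs, and
 * produces extrinsic LLRs for a block of n bits. */
void siso_map_decode(const float *sys, const float *par,
                     const float *apriori, float *extrinsic, size_t n);

/* One Turbo decoding iteration following the FIG. 2 data flow.
 * pi[k] gives the original position read into interleaved position k,
 * i.e. the same permutation used by interleaver 109 in the encoder. */
static void turbo_iteration(const float *x, const float *y1, const float *y2,
                            const size_t *pi, size_t n,
                            float *apriori1,   /* lambda_in(k) for SISO 1  */
                            float *apriori2,   /* a priori input of SISO 2 */
                            float *ext1, float *ext2,
                            float *x_int)      /* interleaved systematic   */
{
    /* First SISO MAP decoder 206: works on x(k), y1(k), lambda_in(k). */
    siso_map_decode(x, y1, apriori1, ext1, n);

    /* Interleavers 210 and 216: reorder extrinsic and systematic data. */
    for (size_t k = 0; k < n; k++) {
        apriori2[k] = ext1[pi[k]];
        x_int[k]    = x[pi[k]];
    }

    /* Second SISO MAP decoder 212: works on interleaved data and y2(k). */
    siso_map_decode(x_int, y2, apriori2, ext2, n);

    /* Deinterleaver 208: feed extrinsic output back for the next pass. */
    for (size_t k = 0; k < n; k++)
        apriori1[pi[k]] = ext2[k];
}

The iteration is typically repeated a fixed number of times, or until a stopping criterion is met, after which hard decisions are taken on the final a posteriori log likelihood values.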
Inventors: Robert Coombs, Jonathan Talbot, Alexander Worm
Assignee: ARC International
Examiner: R. Stephen Dildine
Law Firm: Gazdzinski & Associates