Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction
Reexamination Certificate
2003-01-28
2004-07-06
Decady, Albert (Department: 2133)
C714S786000
Reexamination Certificate
active
06760879
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to turbo decoding methods and to systems adapted to execute these methods.
2. Description of the Related Technology
Turbo encoding is characterized in that the signal to be encoded, u_k, is encoded by a first encoder ENC_1, resulting in a first encoded signal, and an interleaved version of u_k is encoded by a second encoder ENC_2, resulting in a second encoded signal. The original signal u_k and the first and second encoded signals are then transmitted. In the transmitted signal one can thus distinguish a sequence of blocks, being said signal to be encoded u_k and said first and second encoded signals. FIG. 4(a) shows such an encoder set-up. One can state that the complete encoder transforms the original input bits u_k into output symbols c_k, and comprises a concatenation of convolutional component codes separated by a pseudo-random interleaver. It is important to note that in turbo coding, reinitialisation of the state of the encoder is essential for the corresponding decoding procedure. Often part of the input sequence to such an encoder is also adapted such that the end state of the encoder is forced to a particular known state. Because the original signal is also transmitted uncoded, such a coding technique is denoted a systematic code. Note that a pseudo-random interleaver has a mapping function with a substantially irregular pattern.
Turbo decoding is characterized in that the received transmitted code is decoded by a first decoder, resulting in a first decoded signal. Turbo decoding does not stop, however, after determining said first decoded signal. Instead a second decoder performs another decoding step, resulting in a second decoded signal, exploiting the received transmitted code and an interleaved version of the first decoded signal. Afterwards said first decoder performs a further decoding step, exploiting said received transmitted code and an interleaved version of said second decoded signal. This procedure is repeated iteratively until the final decoded signal is determined. Said first and second decoded signals can be denoted intermediate data elements, as they are not the final decoded signal. Turbo decoding can thus be described as iteratively performing decoding methods embedded in the decoders, where the decoding method of one decoder exploits, in each iteration, data generated by the other decoder in the previous iteration. FIG. 4(b) shows such a turbo decoding set-up. Said intermediate data elements are also denoted extrinsic information or a posteriori information. The data element determined in a previous iteration is used as intrinsic information in the next iteration. It is expected that performing said iterations results in intermediate data elements that approximate a maximum-likelihood estimate of the original signal u_k. Because said first and second decoders exploit signals from each other, only one of said decoders is active at a time, meaning that half of the hardware in a turbo decoding architecture is idle while performing turbo decoding. A known approach to solving this is pipelining, meaning that a sequence of hardware blocks is used for turbo decoding: a first block performs the first decoding, a second block performs the second decoding, a third block again performs said first decoding, and so on. While said third block performs its first decoding step, said first block can already start executing its first decoding step on a new sequence. Naturally, such an approach requires a lot of hardware.
References to turbo decoding and the decoding methods used therein are found in: [D. Garrett, M. Stan, "Low Power Architecture of the Soft-Output Viterbi Algorithm", Proc. International Symposium on Low Power Electronics and Design (ISLPED'98), Monterey, Calif., Aug. 10-12, 1998, pp. 262-267]; [O. J. Joeressen, M. Vaupel, H. Meyr, "High-Speed VLSI Architectures for Soft-Output Viterbi Decoding", Journal of VLSI Signal Processing, 1-12, 1998]; [C. Berrou, A. Glavieux, P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes", Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 1064-1070]; [S. S. Pietrobon, "Efficient Implementation of Continuous MAP Decoders and a Synchronisation Technique for Turbo Decoders", Int. Symp. on Inform. Theory and its Applications, Victoria, BC, Canada, Sep. 1996, pp. 586-589].
The decoding methods used by the decoders within said turbo decoding set-up are now briefly described. In particular, Maximum A Posteriori (MAP) approaches are discussed. The log-SISO algorithm is chosen as the specific algorithm for the description, although the invention is not limited thereto; e.g., Soft-Output Viterbi Algorithms can also be used. By operating in the logarithmic domain, expensive multiplications are avoided. Instead the E-operation is introduced, which can easily be implemented using a table look-up or approximated by taking the maximum. The extrinsic information λ_k^ext is calculated from the α and β state metrics as indicated in formula 1, in which c_1 and c_2 are the output bits for an encoder state transition from s to s′ (FIG. 17).
λ_k^ext = E_{x_i=1, s→s′}[δ_k(s,s′)] - E_{x_i=0, s→s′}[δ_k(s,s′)]    (1)
with δ_k(s,s′) = α_k(s) + β_k(s′) + c_1·λ_k^1 + c_2·λ_k^2.
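The E-operation mentioned above, taken here to be the Jacobian logarithm (max*) commonly used in log-domain MAP decoding, can be sketched as follows; the correction term is what a table look-up would store, and dropping it gives the pure-maximum (max-log) approximation. This correspondence is an assumption on our part, not stated explicitly in the text.

```python
import math

def e_op(a, b):
    """E-operation as the Jacobian logarithm (max*): computes
    log(exp(a) + exp(b)) as a maximum plus a small correction term."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def e_op_maxlog(a, b):
    """Max-log approximation: the correction term is dropped entirely."""
    return max(a, b)

def e_reduce(values, op=e_op):
    """Fold the E-operation over a set of branch metrics delta_k(s, s'),
    as required by formula 1."""
    acc = values[0]
    for v in values[1:]:
        acc = op(acc, v)
    return acc
```

In hardware, the `math.log1p(math.exp(...))` correction is the part replaced by a small look-up table indexed by |a - b|.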
The log-likelihood ratios λ_k^i (for i = 1, 2) of the channel symbols y_k^i are defined as:
λ_k^i = log[ P(c_k^i = 1 | y_k^i) / P(c_k^i = 0 | y_k^i) ]    (2)
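For a concrete channel model (an assumption on our part; the text leaves the channel open), formula 2 has a well-known closed form: for BPSK (±1) transmission over an AWGN channel with noise variance σ², the ratio of Gaussian likelihoods reduces to 2y/σ².

```python
import math

def llr_from_probs(p1):
    """Direct form of formula 2, given the posterior P(c=1|y)."""
    return math.log(p1 / (1.0 - p1))

def channel_llr(y, sigma2):
    """BPSK-over-AWGN special case (assumed channel model): the Gaussian
    likelihoods P(y|c=1) and P(y|c=0) make formula 2 collapse to
    2*y/sigma2, since [-(y-1)^2 + (y+1)^2] / (2*sigma2) = 2*y/sigma2."""
    return 2.0 * y / sigma2
```

A received value near zero thus carries almost no information (LLR near 0), while strong positive or negative samples yield large-magnitude LLRs.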
After some iterations the decoded bits û_k are calculated as (λ_k^int is the intrinsic information):

û_k = sign[λ_k^int + λ_k^ext + λ_k^1]
The α and β metrics are obtained through formulas 3 and 4, based on a forward recursion and a backward recursion respectively. They both start in a known initial state at the beginning (for α) or end (for β) of the block.
α_{k+1}(s′) = E_{s→s′}[α_k(s) + c_1·λ_k^int + c_1·λ_k^1 + c_2·λ_k^2]    (3)
β_{k-1}(s) = E_{s→s′}[β_k(s′) + c_1·λ_k^int + c_1·λ_k^1 + c_2·λ_k^2]    (4)
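The forward and backward recursions of formulas 3 and 4 can be sketched generically over a trellis. This is an illustrative sketch under stated assumptions: the trellis is given as a list of (s, s′) transition pairs, the E-operation is realized as max*, and `branch(k, s, s_next)` is a hypothetical callback returning the branch metric (the c_1·λ^int + c_1·λ^1 + c_2·λ^2 sum) for that edge.

```python
import math

NEG_INF = float("-inf")

def max_star(a, b):
    """E-operation realized as max*: log(exp(a) + exp(b))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def forward_recursion(trellis, n_states, N, branch):
    """Formula 3: alpha_{k+1}(s') = E over edges s->s' of alpha_k(s)
    plus the branch metric.  Starts from a known initial state 0."""
    alpha = [[NEG_INF] * n_states for _ in range(N + 1)]
    alpha[0][0] = 0.0
    for k in range(N):
        for s, s_next in trellis:
            m = alpha[k][s] + branch(k, s, s_next)
            a = alpha[k + 1][s_next]
            alpha[k + 1][s_next] = m if a == NEG_INF else max_star(a, m)
    return alpha

def backward_recursion(trellis, n_states, N, branch, end_state=0):
    """Formula 4: beta_{k-1}(s) = E over edges s->s' of beta_k(s')
    plus the branch metric.  Starts from a known end state."""
    beta = [[NEG_INF] * n_states for _ in range(N + 1)]
    beta[N][end_state] = 0.0
    for k in range(N - 1, -1, -1):
        for s, s_next in trellis:
            m = beta[k + 1][s_next] + branch(k, s, s_next)
            b = beta[k][s]
            beta[k][s] = m if b == NEG_INF else max_star(b, m)
    return beta
```

Both recursions touch every trellis edge once per trellis step, which is why their cost, and the storage of the α metrics in particular, dominates the decoder.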
In general, in these MAP algorithms a computing step and a determining step can be distinguished. Said computing step is characterized by the computation of two vector sequences or state metrics. Said vector sequences are computed via recursions: a forward recursion for determining said first state metrics and a backward recursion for determining said second state metrics. Said state metric determination exploits the encoded signal (via λ_k^1, λ_k^2) and intermediate data elements λ^int produced by the other decoder. Said decoded signal u_k is determined by combining said encoded signal (via λ_k^1), said first state metrics and said second state metrics (via λ^ext). Note that the coding process can be seen as a transition in a finite state machine, wherein the register content of the convolutional coder denotes the state of the encoder, which completely determines the behaviour of the coder for the next input bit. One often represents this with a trellis graph showing the state transitions. The state metrics exploited in the decoding process refer in principle back to these encoder states.
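The finite-state-machine view above can be made concrete by enumerating the trellis edges of a small convolutional encoder. This sketch uses a toy feedforward encoder with illustrative tap polynomials (not the patent's component code): the state is the last `memory` input bits, and each (state, input) pair yields a next state and an output bit.

```python
def build_trellis(memory=2, taps=0b111):
    """Enumerate the state transitions (trellis edges) of a toy
    feedforward convolutional encoder.  Each edge is a tuple
    (state, input_bit, next_state, output_bit)."""
    n_states = 1 << memory
    edges = []
    for s in range(n_states):
        for u in (0, 1):
            # Combine input bit with register contents: bits (u, b_{k-1}, ...).
            combined = (u << memory) | s
            out = bin(combined & taps).count("1") & 1    # parity of tapped bits
            # Shift the new bit in; the oldest register bit falls out.
            s_next = (u << (memory - 1)) | (s >> 1)
            edges.append((s, u, s_next, out))
    return edges
```

For `memory=2` this yields 8 edges (4 states × 2 input bits), and every state is reachable, which is exactly the regular trellis structure the decoder's state metrics range over.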
The α metrics need to be stored, however, since the first λ^ext can only be calculated once the entire forward recursion is finished. This results in storage of N metrics for all the states, which is unacceptable for most practical interleaver sizes N. A solution to the storage-requirement problem of the normal SISO algorithm presented above is the introduction of sliding windows [S. S. Pietrobon, supra].
Catthoor Francky
Engels Marc
Giese Jochen Uwe
Gyselinckx Bert
Schurgers Curt
Chase Shelly A
Interuniversitair Microelektronica Centrum (IMEC)
Knobbe Martens Olson & Bear LLP