DSP instruction for turbo decoding

Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction


Details

C714S759000

Reexamination Certificate

active

06725409

ABSTRACT:

FIELD OF THE INVENTION
This invention relates in general to Digital Signal Processor (DSP) Cores and more specifically to a DSP Instruction for Turbo Decoding.
BACKGROUND OF THE INVENTION
Turbo coders are one type of forward error correction (FEC) used in today's communication systems. They are becoming widely used in many applications, such as wireless handsets, wireless base stations, hard disk drives, wireless LANs, satellites, and digital television. Their bit error rate (BER) performance is closer to the Shannon limit than that of other types of FEC, as illustrated in FIG. 2. Turbo coders work on blocks of data called frames. There are two main wireless standards, 3GPP and CDMA2000: the frame size for 3GPP ranges from 40 to 5114 bits, and the frame size for CDMA2000 ranges from 378 to 20,730 bits. One implementation of a turbo decoder was designed to run in parallel with the TI C64X DSP, with the DSP executing all of the communication algorithm in software except for turbo decoding. The DSP downloads a frame of data to the turbo decoder and starts the decode. When the decode is complete, the decoder interrupts the DSP with either a hard or a soft interrupt. The DSP then retrieves the corrected frame and continues executing the remaining parts of the communication algorithm. The data entering the decoder is soft and is assumed to be quantized to 8 bits in this example; the data exiting the decoder is hard (binary). The turbo decoder attempts to find and fix as many errors as possible. Turbo decoders achieve good results by iteratively decoding the data in the frame many times. The typical number of iterations ranges from 1 to 100, and the result usually improves with each iteration until the optimum solution is obtained.
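The offload sequence described above (download, start, wait for the interrupt, read back) can be pictured as a short control routine on the DSP. The sketch below is purely illustrative: the register addresses, names and interrupt hookup are hypothetical placeholders, not the actual programming interface of the TI C64X or its turbo coprocessor.

#include <stdint.h>

/* Hypothetical memory-mapped interface of an external turbo-decoder
 * coprocessor; addresses and layout are illustrative only. */
#define TCP_INPUT_RAM   ((volatile int8_t  *)0x60000000)   /* soft 8-bit symbols in */
#define TCP_OUTPUT_RAM  ((volatile uint8_t *)0x60010000)   /* hard decoded bits out */
#define TCP_CTRL        (*(volatile uint32_t *)0x60020000)
#define TCP_START       0x1u

static volatile int decode_done = 0;       /* set by the decoder's interrupt */

void tcp_isr(void) { decode_done = 1; }    /* hard or soft interrupt handler */

/* Download one frame of soft symbols, start the decode, wait for the
 * completion interrupt, then read back the corrected (hard) frame. */
void decode_frame(const int8_t *soft_in, int n_symbols,
                  uint8_t *hard_out, int n_bits)
{
    for (int i = 0; i < n_symbols; i++)    /* DSP downloads the frame */
        TCP_INPUT_RAM[i] = soft_in[i];

    decode_done = 0;
    TCP_CTRL = TCP_START;                  /* start the iterative decode */

    while (!decode_done)                   /* in a real system the DSP would run */
        ;                                  /* other work instead of spinning here */

    for (int i = 0; i < n_bits; i++)       /* retrieve the corrected frame */
        hard_out[i] = TCP_OUTPUT_RAM[i];
}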
An illustrative example of a wireless handset or base station is shown in FIG. 1, wherein digital hard data is modulated and transmitted from the transmitter portion of the wireless base station and soft data is received at the receiver portion. As illustrated, noise is introduced between the transmitter and the receiver; as a result, the received data is soft, i.e. "0.9" or "−0.2", instead of the transmitted modulated values "1" or "−1". The encoder is typically located in the transmitter of a wireless base station, for example, while the decoder is typically located in the receiver.
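The hard/soft distinction can be made concrete in a few lines of C: the transmitter maps each hard bit to +1 or −1, the channel adds noise, and the receiver sees soft values such as 0.9 or −0.2. The crude uniform noise below is only for illustration and is not the channel model of the patent.

#include <stdio.h>
#include <stdlib.h>

/* Map a hard bit to a modulated symbol: 1 -> +1.0, 0 -> -1.0. */
static double modulate(int bit) { return bit ? 1.0 : -1.0; }

/* Crude illustrative channel: add a small random perturbation. */
static double add_noise(double symbol)
{
    return symbol + ((double)rand() / RAND_MAX - 0.5) * 1.5;
}

int main(void)
{
    int bits[4] = { 1, 0, 1, 1 };
    for (int i = 0; i < 4; i++) {
        double tx = modulate(bits[i]);   /* hard: exactly +1 or -1 */
        double rx = add_noise(tx);       /* soft: e.g. 0.9 or -0.2 */
        printf("sent %+.0f  received %+.2f\n", tx, rx);
    }
    return 0;
}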
An example of a rate ⅓ parallel concatenated encoder is shown in FIG. 3. The encoder illustrated in FIG. 3 is a rate "⅓" parallel concatenated encoder because it has one input stream and three output streams. The "I" block in FIG. 3 is an interleaver, which randomly scrambles the information bits to decorrelate the noise for the decoder. Included in FIG. 3 are two Recursive Systematic Convolutional (RSC) encoders running in parallel. The interleaver located in the encoder scrambles the information in the same way that the interleaver located in the decoder (illustrated in FIG. 6) must unscramble it; therefore the scrambling can take any form or use any algorithm, as long as both the encoder and the decoder use the same scrambling method.
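The parallel concatenation of FIG. 3 can be sketched in C as follows. The helper rsc_parity() is an assumed placeholder for one of the constituent RSC encoders (its per-bit logic is sketched after the trellis discussion below); the interleaver is represented here simply as a permutation table.

#include <stdlib.h>

/* Assumed helper (not from the patent): runs one RSC constituent encoder
 * of FIG. 4 over a frame and writes its parity bits. */
void rsc_parity(const int *in, int n, int *parity);

/* Structure of the rate-1/3 parallel concatenated encoder of FIG. 3:
 * one systematic stream plus one parity stream from each of two RSC
 * encoders, the second of which sees an interleaved copy of the data. */
void turbo_encode(const int *x, const int *interleave_idx, int n,
                  int *systematic, int *parity0, int *parity1)
{
    int *xi = malloc(n * sizeof *xi);

    for (int k = 0; k < n; k++) {
        systematic[k] = x[k];                /* output stream 1: x     */
        xi[k] = x[interleave_idx[k]];        /* "I" block scrambles x  */
    }
    rsc_parity(x,  n, parity0);              /* output stream 2: p0    */
    rsc_parity(xi, n, parity1);              /* output stream 3: p1    */

    free(xi);
}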
The functional block diagram of the turbo decoder is shown in FIG. 6. The frame entering the decoder contains systematic and parity symbols x′, p′_0 and p′_1, respectively. These symbols are scaled once by the DSP and stored in separate memories within the turbo decoder. Turbo decoding achieves an error performance close to the Shannon limit. The performance is achieved through decoding multiple iterations; each iteration yields additional performance and additional computational delay. Turbo codes consist of a concatenation of convolutional codes, connected by an interleaver, with an iterative decoding algorithm. The iterative decoder generates soft decisions from a maximum-a-posteriori (MAP) block. Each iteration requires the execution of two MAP decodes to generate two sets of extrinsic information: the first MAP decoder uses the non-interleaved data as its input and the second MAP decoder uses the interleaved data. The frame of input data entering the decoder contains the systematic symbols x′ and the parity symbols p′_0 and p′_1. There are N of these symbols and they are soft (not binary). The symbols are scaled once and stored in memory. The scaled symbols are labeled Λ(x), Λ(p_0) and Λ(p_1) in FIG. 6. These inputs are constant for the entire decode of that block of data.
The input to the upper MAP decoder is Λ(x), Λ(p_0) and A_2, where A_2 is the a priori information from the lower MAP decoder. The output of the upper MAP decoder is the first extrinsic, W_1. W_1 is interleaved to make A_1. The input to the lower MAP decoder is Λ(x), Λ(p_1) and A_1. The output of the lower MAP decoder is the second extrinsic, W_2. W_2 is deinterleaved to make A_2. This completes one iteration.
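One iteration of the loop just described can be sketched in C as follows. The helper functions map_decode(), interleave() and deinterleave(), and the buffer names, are hypothetical placeholders for the MAP decoders and the interleaver/deinterleaver memories of FIG. 6; they are not defined in the patent text.

/* Hypothetical helpers standing in for the blocks of FIG. 6. */
void map_decode(const double *Lx, const double *Lp,
                const double *apriori, double *extrinsic, int N);
void interleave(const double *in, double *out, int N);
void deinterleave(const double *in, double *out, int N);

/* Iterative decoding loop of FIG. 6.  Lx, Lp0, Lp1 are the scaled inputs
 * Λ(x), Λ(p0), Λ(p1); Lx_int is the interleaved copy of Λ(x) used by the
 * second (lower) MAP decoder. */
void turbo_decode(const double *Lx, const double *Lx_int,
                  const double *Lp0, const double *Lp1,
                  double *A1, double *A2, double *W1, double *W2,
                  int N, int iterations)
{
    for (int it = 0; it < iterations; it++) {
        /* Upper MAP: Λ(x), Λ(p0) and A2 in; first extrinsic W1 out. */
        map_decode(Lx, Lp0, A2, W1, N);
        interleave(W1, A1, N);              /* W1 interleaved -> A1   */

        /* Lower MAP: interleaved Λ(x), Λ(p1) and A1 in; W2 out.      */
        map_decode(Lx_int, Lp1, A1, W2, N);
        deinterleave(W2, A2, N);            /* W2 deinterleaved -> A2 */
    }                                       /* one iteration complete */
}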
The MAP decoder's function is to determine the logarithm of the likelihood ratio (LLR). This is commonly called the extrinsic and is labeled W_1 and W_2 in FIG. 6. The extrinsic associated with each decoded bit x_n is

W_n = log [ Pr(x_n = 1 | R_1^n) / Pr(x_n = 0 | R_1^n) ]

where R_1^n = (R_0, R_1, . . . , R_{n−1}) denotes the symbols received by the decoder. The MAP decoder computes the a posteriori probabilities:
Pr(x_n = i | R_1^n) = (1 / Pr(R_1^n)) · Σ_{m, m′} Pr(x_n = i, S_n = m, S_{n−1} = m′)
Here S_n refers to the state at time n in the trellis of the constituent convolutional code. The trellis of the code rate ⅓ encoder of FIG. 4 is shown in FIG. 5. FIG. 3 shows the two encoders of FIG. 4, connected in parallel, in which the second parity is punctured (or not used). FIG. 4 illustrates the logic gates and registers. The encoder has a code rate of "⅓" because there are three outputs for one input. The 3-bit values to the far left of the trellis represent the contents of the three registers, one bit per register. The numbers just to the right of those 3-bit values are the states, S_n, within the trellis; there are 8 possible states because there are three registers, each of which can hold either a '1' or a '0' bit, i.e. 2^3 = 8. The trellis gives the output of the RSC encoder as a function of the initial state, S_n, of the encoder, the values stored in the registers, and the input bit. For example, if the initial state of the encoder is state '0', so that all the registers contain a '0', and the input bit is a '0', then as illustrated in the trellis of FIG. 5 the output will be "000", representing the systematic bit and the two parity bits, respectively. As another example, if the encoder is in state '5', so the registers store "101", and the input bit is a '1', the output is "100".
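One step of such an 8-state RSC encoder can be sketched in C. FIG. 4 itself is not reproduced in this excerpt, so the feedback and feedforward polynomials below (1+D^2+D^3, 1+D+D^3, and 1+D+D^2+D^3) are an assumption, chosen because they reproduce the two worked examples above (state 0, input 0 → "000"; state 5, input 1 → "100"); the next-state shift convention is likewise an assumption.

#include <stdio.h>

/* One step of an 8-state rate-1/3 RSC encoder.  State S_n packs the
 * three register bits (r1 r2 r3) as r1*4 + r2*2 + r3.  The polynomials
 * are assumed, not taken from FIG. 4. */
static void rsc_step(int state, int in_bit, int *next_state, int out[3])
{
    int r1 = (state >> 2) & 1, r2 = (state >> 1) & 1, r3 = state & 1;
    int fb = in_bit ^ r2 ^ r3;           /* feedback 1 + D^2 + D^3       */

    out[0] = in_bit;                     /* systematic bit               */
    out[1] = fb ^ r1 ^ r3;               /* parity from 1 + D + D^3      */
    out[2] = fb ^ r1 ^ r2 ^ r3;          /* parity from 1 + D + D^2 + D^3 */

    *next_state = (fb << 2) | (r1 << 1) | r2;   /* shift the registers   */
}

int main(void)
{
    int ns, out[3];
    rsc_step(0, 0, &ns, out);   /* state 0, input 0 -> output "000" */
    printf("%d%d%d\n", out[0], out[1], out[2]);
    rsc_step(5, 1, &ns, out);   /* state 5, input 1 -> output "100" */
    printf("%d%d%d\n", out[0], out[1], out[2]);
    return 0;
}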
The terms in the summation can be expressed in the form

Pr(x_n = i, S_n = m, S_{n−1} = m′) = α_{n−1}(m′) · γ_i^n(m′, m) · β_n(m)
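In the log domain this combination of α, γ and β reduces to additions, and the LLR W_n becomes the difference of two "exponent logarithm" accumulations, one over the trellis branches corresponding to x_n = 1 and one over those for x_n = 0 (the common 1/Pr(R_1^n) factor cancels in the ratio). The sketch below shows that computation; the array layout and the next_state table are illustrative assumptions, not taken from the patent.

#include <math.h>

#define NSTATES 8

/* ln(e^a + e^b): the exponent logarithm operation (see after Table 1). */
static double maxstar(double a, double b)
{
    return (a > b ? a : b) + log1p(exp(-fabs(a - b)));
}

/* Log-domain extrinsic for one bit:
 *   W_n = log Pr(x_n = 1 | R_1^n) - log Pr(x_n = 0 | R_1^n).
 * alpha[m'] holds the forward metrics at time n-1, beta_next[m] the
 * backward metrics at time n, gamma[m'][i] the branch metric for
 * hypothesised input i, and next_state[m'][i] the trellis of FIG. 5. */
double extrinsic_llr(const double alpha[NSTATES],
                     const double beta_next[NSTATES],
                     const double gamma[NSTATES][2],
                     const int next_state[NSTATES][2])
{
    double num = -INFINITY, den = -INFINITY;

    for (int mp = 0; mp < NSTATES; mp++) {      /* previous state m' */
        for (int i = 0; i < 2; i++) {           /* hypothesised bit  */
            int m = next_state[mp][i];          /* next state m      */
            double t = alpha[mp] + gamma[mp][i] + beta_next[m];
            if (i) num = maxstar(num, t);
            else   den = maxstar(den, t);
        }
    }
    return num - den;                           /* W_n               */
}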
The following simplified equation is used to calculate α, β and the a posteriori probability (APP) of the bit x_k:

F = ln[e^A + e^B]

This equation will be called the exponent logarithm equation. For an eight-state code, the exponent logarithm equation is executed 8(N+3) times in the generation of alpha and 8(N+3) times in the generation of beta; the 3 in (N+3) is the extra processing associated with the 3 extra tail bits. The exponent logarithm equation is executed 8N times in the generation of the extrinsic. Table 1 lists the number of exponent logarithm operations required for several different sizes of N. These numbers are for a non-sliding-block implementation and are 10% to 15% greater for a sliding-block implementation of the MAP decoder. A code sketch of the exponent logarithm operation appears after Table 1.
TABLE 1

N      α       β       γ       number per MAP   number per iteration   number per 10 iterations
320    2,584   2,584
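The exponent logarithm equation is the operation that dominates the counts in Table 1, which is why it is the natural target for a dedicated DSP instruction. A standard, numerically safe way to evaluate it is the identity ln(e^A + e^B) = max(A, B) + ln(1 + e^−|A−B|); whether the patented instruction uses exactly this form is not stated in this excerpt. A minimal C sketch:

#include <math.h>
#include <stdio.h>

/* Direct evaluation of the exponent logarithm equation F = ln(e^A + e^B).
 * It overflows for large A or B, which is why hardware does not compute
 * it this way. */
static double exp_log_direct(double a, double b)
{
    return log(exp(a) + exp(b));
}

/* Numerically safe "max-star" form of the same equation:
 *   ln(e^A + e^B) = max(A, B) + ln(1 + e^-|A-B|).
 * The correction term is bounded by ln(2), so hardware typically
 * approximates it with a small lookup table on |A-B|. */
static double exp_log_maxstar(double a, double b)
{
    return (a > b ? a : b) + log1p(exp(-fabs(a - b)));
}

int main(void)
{
    double a = 3.2, b = 1.7;
    printf("direct   : %f\n", exp_log_direct(a, b));
    printf("max-star : %f\n", exp_log_maxstar(a, b));
    return 0;
}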
