Maximum a posteriori probability decoding method and apparatus

Pulse or digital communications – Receivers – Particular pulse demodulator or detector

Reexamination Certificate


Details

C714S794000, C714S795000


active

06563890

ABSTRACT:

TECHNICAL FIELD
This invention relates to a maximum a posteriori probability (MAP) decoding method and to a decoding apparatus that employs this decoding method. More particularly, the invention relates to a maximum a posteriori probability decoding method and apparatus for calculating backward probability and then calculating forward probability in maximum a posteriori probability decoding, thereby reducing the amount of memory used.
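The order of computation described above (backward probabilities first, forward probabilities afterward) can be illustrated with a hedged sketch. The following is not the patented apparatus but a generic BCJR-style decode over a small assumed 4-state trellis and an assumed binary symmetric channel: the backward probabilities beta are computed and stored for every stage, after which the forward probabilities alpha are computed one step at a time and combined with beta immediately, so the alpha history never has to be kept in memory.

```python
import math

# Assumed 4-state trellis for this sketch: the state is the shift-register
# pair (r0, r1); input bit u yields two output bits and the next state.
def step(state, u):
    r0, r1 = state
    return (u, r0), (u ^ r1, u ^ r0 ^ r1)

STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]

def gamma(received_pair, out, p_err=0.1):
    # Branch probability of observing received_pair given transmitted out,
    # over a binary symmetric channel with assumed bit-error rate p_err.
    p = 1.0
    for r, o in zip(received_pair, out):
        p *= (1.0 - p_err) if r == o else p_err
    return p

def map_decode(received):
    K = len(received)
    # Backward pass FIRST: beta[k][state] is stored for every trellis stage.
    beta = [{s: 1.0 for s in STATES} for _ in range(K + 1)]
    for k in range(K - 1, -1, -1):
        for s in STATES:
            beta[k][s] = sum(
                gamma(received[k], out) * beta[k + 1][ns]
                for ns, out in (step(s, u) for u in (0, 1)))
    # Forward pass: only the CURRENT alpha vector is held, never the history,
    # because each alpha is combined with the stored beta as soon as it exists.
    alpha = {s: (1.0 if s == (0, 0) else 0.0) for s in STATES}
    llrs = []
    for k in range(K):
        p = [0.0, 0.0]
        new_alpha = {s: 0.0 for s in STATES}
        for s in STATES:
            for u in (0, 1):
                ns, out = step(s, u)
                g = gamma(received[k], out)
                p[u] += alpha[s] * g * beta[k + 1][ns]   # combine immediately
                new_alpha[ns] += alpha[s] * g
        alpha = new_alpha
        llrs.append(math.log(p[1] / p[0]))  # a posteriori log-likelihood ratio
    return llrs
```

The sign of each returned log-likelihood ratio gives the hard decision for that bit; only beta is ever stored as a full array, which is the memory saving the reversed order of computation buys.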
BACKGROUND ART
Error correction codes, which are for the purpose of correcting errors contained in received information or in reconstructed information so that the original information can be decoded correctly, are applied to a variety of systems. For example, error correction codes are applied in cases where data is to be transmitted without error when performing mobile communication, FAX or other data communication, and in cases where data is to be reconstructed without error from a large-capacity storage medium such as a magnetic disk or CD.
Among the available error correction codes, it has been decided to adopt turbo codes (see the specification of U.S. Pat. No. 5,446,747) for standardization in next-generation mobile communications. Maximum a posteriori probability decoding (MAP decoding) manifests its effectiveness in such turbo codes. A MAP decoding method is a method of decoding that resembles Viterbi decoding.
(a) Viterbi Decoding
Using a kth item of data of encoded data obtained by encoding information of information length N, Viterbi decoding selects, for each state prevailing at the moment of input of the kth item of data, whichever of the two paths leading to that state has the smaller error, and discards the path having the larger error. Thenceforth, and in similar fashion, it selects, for each state prevailing at the moment of input of a final Nth item of data, whichever of the two paths leading to that state has the smaller error, and performs decoding using the path of smallest error among the paths selected at each of the states. The result of decoding is a hard-decision output.
Since the Viterbi decoding method is a method of decoding convolutional codes, convolutional encoding will be described first and then Viterbi decoding will be described.
FIG. 11 shows an example of a convolutional encoder, which has a 2-bit shift register SFR and two exclusive-OR gates EXOR1 and EXOR2. EXOR1 outputs the exclusive-OR g0 of the input and R1, and EXOR2 outputs the exclusive-OR g1 of the input, R0 and R1 (outputting “1” when the number of “1”s is odd and “0” otherwise). Accordingly, the relationship between the input and outputs of the convolutional encoder and the states of the shift register SFR in an instance where the input data is 01101 is as illustrated in FIG. 12.
The content of the shift register SFR of the convolutional encoder is defined as the “state”. As shown in FIG. 13, there are four states, namely 00, 01, 10 and 11, which are referred to as state a, state b, state c and state d, respectively. With the convolutional encoder of FIG. 11, the outputs (g0, g1) and the next state are uniquely determined by which of the states a through d the shift register SFR is in and by whether the next item of input data is “0” or “1”.
FIG. 14 is a diagram showing the relationship between the states of the convolutional encoder and its inputs and outputs, in which the dashed lines indicate a “0” input and the solid lines a “1” input.
(1) If “0” is input in state a, the output is 00 and the state remains a; if “1” is input, the output is 11 and the state becomes c.
(2) If “0” is input in state b, the output is 11 and the state becomes a; if “1” is input, the output is 00 and the state becomes c.
(3) If “0” is input in state c, the output is 01 and the state becomes b; if “1” is input, the output is 10 and the state becomes d.
(4) If “0” is input in state d, the output is 10 and the state becomes b; if “1” is input, the output is 01 and the state becomes d.
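The four transition rules above follow directly from the generator equations given for FIG. 11 (g0 is the exclusive-OR of the input and R1; g1 is the exclusive-OR of the input, R0 and R1). As an illustrative sketch, not the patent's own apparatus, the encoder can be condensed into a few lines of Python, with the state held as the register pair (R0, R1):

```python
def encode(bits):
    """Convolutionally encode a bit string with the FIG. 11 encoder."""
    r0 = r1 = 0                      # 2-bit shift register SFR, initial state a (00)
    out = []
    for ch in bits:
        u = int(ch)
        g0 = u ^ r1                  # EXOR1: input XOR R1
        g1 = u ^ r0 ^ r1             # EXOR2: input XOR R0 XOR R1
        out.append(f"{g0}{g1}")
        r0, r1 = u, r0               # shift the new input into the register
    return out
```

Encoding the data 11001 with this sketch reproduces the output sequence 11, 10, 10, 11, 11 traced along the dashed path of the lattice representation discussed below.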
If the convolutional codes of the convolutional encoder shown in FIG. 11 are expressed in the form of a lattice using the above input/output relationship, the result is as shown in FIG. 15(a), where k signifies the time at which a kth bit is input and the initial (k=0) state of the encoder is a(00). The dashed lines indicate a “0” input and the solid lines a “1” input, and the two numerical values on the lines are the outputs (g0, g1). Accordingly, it will be understood that if “0” is input in the initial state a(00), the output is 00 and the state remains a, and that if “1” is input, the output is 11 and the state becomes c.
Upon referring to this lattice-like representation, it will be understood that if the original data is 11001, state c is reached via the path indicated by the dashed arrows in FIG. 15(b), and that the outputs of the encoder become 11→10→10→11→11.
When a convolutional code is decoded, first a decision concerning the received data is rendered. There are two types of decisions, namely a hard decision and a soft decision.
In the case of a hard decision, the decision is rendered on the basis of two quantization levels, i.e., “1” is decided if the detected output level is larger than 0 and “0” is decided if the detected output level is less than 0. If a standardized determination is made in this fashion, an erroneous decision occurs at the spreading portion (the hatched area) of the skirts of the respective probability density functions shown in FIG. 16.
A soft decision compensates for this disadvantage of the hard decision. The detected output is quantized at eight levels, as indicated by one example on the left side of FIG. 16; likelihood weighting conforming to each level is applied and the results of the decisions are output to a decoder.
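As a small illustration of such 8-level quantization, a detected level can be mapped to one of eight soft values that the decoder then treats as confidence weights. The uniform mapping over the range [-1.0, 1.0] below is an assumption for the sketch, not taken from FIG. 16:

```python
def quantize8(x):
    """Quantize a detected output level x in [-1.0, 1.0] to 8 levels 0..7."""
    level = int((x + 1.0) / 2.0 * 8)   # uniform bins (an assumed mapping)
    return max(0, min(7, level))       # clamp the boundary value x = 1.0
```

Strongly negative levels map to 0 (a confident “0”), strongly positive levels map to 7 (a confident “1”), and values near 0, where a hard decision would be error-prone, map to intermediate levels that carry less weight in the decoder's metric.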
With Viterbi decoding, both a hard-decision input with a hard-decision output and a soft-decision input with a hard-decision output are possible. The former will be described first.
If the ideal error-free state is assumed, in which the hard-decision receive data (g0, g1) is 11→10→10→11→11, the path indicated by the dashed arrows in FIG. 17(a) is obtained. By making the dashed lines “0”s and the solid lines “1”s, the decoded result 11001 can be obtained, as illustrated in FIG. 17(b). In actuality, however, there are many cases where the receive data contains an error. If the fifth bit develops an error so that the hard-decision receive data (g0, g1) is 11→10→00→11→11, confusion occurs at time k=2 as to whether to branch to 10 or 01 (error count ERR=1). If 10 is construed to be the state and the upper path is selected, state c is reached without confusion at k=3 and k=4. Accordingly, the error count becomes ERR=1 on the dashed-arrow path and the decoded result at this time becomes 11001. On the other hand, if 01 is construed to be the state and the lower path is selected at time k=2, then confusion also occurs at time k=3 as to where to branch, and a total error count ERR=2 is the result. Thereafter, and in similar fashion, paths are selected and, when branching confusion occurs, ERR is counted up. The following results are eventually obtained:
total error count ERR when decoded result is 11001: 1
total error count ERR when decoded result is 11100: 2
total error count ERR when decoded result is 11110: 3
total error count ERR when decoded result is 11111: 3
Accordingly, the decoded result 11001 for which the error count ERR is smallest is selected and output. If this arrangement is adopted, the original data 11001 can be detected correctly even if the receive data is erroneous.
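The survivor-path selection just described can be sketched as a compact hard-decision Viterbi decoder. This is an illustrative Python model of the FIG. 11 trellis, counting bit disagreements as the path metric ERR, not the implementation claimed by the patent:

```python
def viterbi_decode(received):
    """Hard-decision Viterbi decoding over the FIG. 11 trellis.

    received: list of (g0, g1) hard-decision bit pairs.
    Returns (decoded_bits, error_count) for the smallest-ERR path.
    """
    def step(state, u):
        r0, r1 = state
        return (u, r0), (u ^ r1, u ^ r0 ^ r1)   # next state, outputs (g0, g1)

    INF = float("inf")
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}  # start in state a
    path = {s: [] for s in states}
    for r in received:
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for s in states:
            if metric[s] == INF:
                continue                          # state not yet reachable
            for u in (0, 1):
                ns, out = step(s, u)
                err = (out[0] != r[0]) + (out[1] != r[1])   # branch errors
                if metric[s] + err < new_metric[ns]:        # keep the survivor
                    new_metric[ns] = metric[s] + err
                    new_path[ns] = path[s] + [u]
        metric, path = new_metric, new_path
    best = min(states, key=lambda s: metric[s])
    return path[best], metric[best]
```

Feeding in the erroneous receive sequence from the text, 11→10→00→11→11, this sketch returns the decoded bits 1, 1, 0, 0, 1 with total ERR = 1, in agreement with the table of error counts above.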
Though the foregoing relates to hard-decision receive data, decoding can be executed in similar fashion in the case of soft-decision receive data as well.
FIG. 18 is a diagram useful in describing decoding in the case of soft-decision receive data. Assume that the soft-decision receive data (g0, g1) is 1,1
