Decoder and decoding method
Reexamination Certificate (active), Patent No. 06525680
2001-06-07 / 2003-02-25
Examiner: Jean-Pierre, Peguy (Department: 2819)
Classification: Coded data generation or conversion; Digital code to digital code converters; To or from code based on probability
Classes: C714S746000, C714S786000, C714S758000
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to a decoder and a decoding method adapted to soft-output decoding.
2. Related Background Art
There have been many studies in recent years on minimizing symbol error rates by obtaining soft-outputs for the decoded outputs of inner codes of concatenated codes, or for the outputs of iterative decoding operations using a recursive decoding method. There have also been studies on developing decoding methods adapted to producing soft-outputs. For example, Bahl, Cocke, Jelinek and Raviv, "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. Inf. Theory, Vol. IT-20, pp. 284-287, March 1974, describes an algorithm for minimizing symbol error rates when decoding predetermined codes such as convolutional codes. This algorithm will be referred to as the BCJR algorithm hereinafter. The BCJR algorithm is designed to output not each symbol but the likelihood of each symbol as the result of the decoding operation. Such an output is referred to as a soft-output. The BCJR algorithm will be discussed below, firstly by referring to FIG. 1. Assume that digital information is put into convolutional codes by the encoder 201 of a transmitter (not shown), whose output is then input to a receiver (not shown) by way of a memoryless channel 202 having noises, and decoded by the decoder 203 of the receiver for observation.
The M states (transitional states) representing the contents of the shift registers of the encoder 201 are denoted by an integer m (m = 0, 1, ..., M−1), and the state at time t is denoted by S_t. If information of k bits is input in a time slot, the input at time t is expressed by i_t = (i_t1, i_t2, ..., i_tk) and the input sequence is expressed by I_1^T = (i_1, i_2, ..., i_T). If there is a transition from state m′ to state m, the information bits corresponding to the transition are expressed by i(m′, m) = (i_1(m′, m), i_2(m′, m), ..., i_k(m′, m)). Additionally, if a code of n bits is output in a time slot, the output at time t is expressed by x_t = (x_t1, x_t2, ..., x_tn) and the output sequence is expressed by X_1^T = (x_1, x_2, ..., x_T). If there is a transition from state m′ to state m, the code bits corresponding to the transition are expressed by x(m′, m) = (x_1(m′, m), x_2(m′, m), ..., x_n(m′, m)).
The encoder 201 starts producing convolutional codes at state S_0 = 0 and ends at state S_T = 0 after outputting X_1^T. The inter-state transition probabilities P_t(m | m′) of the above encoder are defined by formula (1) below:

    P_t(m | m′) = Pr{S_t = m | S_{t−1} = m′}   (1)
where Pr{A | B} on the right side of the above equation represents the conditional probability that A occurs under the condition that B occurs. The transition probability P_t(m | m′) is equal to the probability Pr{i_t = i} that the input i_t at time t equals i when a transition from state m′ to state m occurs with input i, as shown by formula (2) below:

    P_t(m | m′) = Pr{i_t = i}   (2)
The memoryless channel 202 having noises receives X_1^T as input and outputs Y_1^T. If a received value of n bits is output in a time slot, the output at time t is expressed by y_t = (y_t1, y_t2, ..., y_tn) and the output sequence is expressed by Y_1^T = (y_1, y_2, ..., y_T). Then, the transition probabilities of the memoryless channel 202 having noises can be defined for all values of t (1 ≤ t ≤ T) by using the transition probability of each symbol, Pr{y_j | x_j}:
    Pr{Y_1^t | X_1^t} = ∏_{j=1}^{t} Pr{y_j | x_j}   (3)
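Formula (3) says that for a memoryless channel the likelihood of a received sequence factorizes into per-symbol transition probabilities. The following minimal sketch illustrates this, assuming (purely for illustration; the patent does not specify a channel model) a binary symmetric channel with crossover probability p as the per-symbol model:

```python
# Formula (3): Pr{Y_1^t | X_1^t} = product over j of Pr{y_j | x_j}
# for a memoryless channel. The per-symbol model used here, a binary
# symmetric channel (BSC), is an illustrative assumption.

def symbol_prob(y, x, p=0.1):
    """Pr{y | x} for a BSC with crossover probability p."""
    return 1.0 - p if y == x else p

def sequence_prob(Y, X, p=0.1):
    """Pr{Y_1^t | X_1^t} as the product over symbols, per formula (3)."""
    prob = 1.0
    for y, x in zip(Y, X):
        prob *= symbol_prob(y, x, p)
    return prob
```

Because the channel is memoryless, each factor depends only on the symbol pair at one position; no channel state carries over between positions.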
Now, λ_tj is defined by formula (4) below as the likelihood of the input information at time t when Y_1^T is received, that is, the soft-output to be obtained:

    λ_tj = Pr{i_tj = 1 | Y_1^T} / Pr{i_tj = 0 | Y_1^T}   (4)
With the BCJR algorithm, the probabilities α_t, β_t and γ_t are defined respectively by formulas (5) through (7) below. Note that Pr{A; B} represents the probability that both A and B occur.

    α_t(m) = Pr{S_t = m; Y_1^t}   (5)

    β_t(m) = Pr{Y_{t+1}^T | S_t = m}   (6)

    γ_t(m′, m) = Pr{S_t = m; y_t | S_{t−1} = m′}   (7)
Now, the probabilities α_t, β_t and γ_t will be described by referring to FIG. 2, which is a trellis diagram, or state transition diagram, of the encoder 201. Referring to FIG. 2, α_{t−1} corresponds to the passing probability of each state at time t−1, computed on a time-series basis from the coding start state S_0 = 0 by using the received values; β_t corresponds to the passing probability of each state at time t, computed on an inverse-time-series basis from the coding end state S_T = 0 by using the received values; and γ_t corresponds to the reception probability of the output of each branch showing a transition from one state to another at time t, computed on the basis of the received value and the input probability.
Then, the soft-output λ_tj is expressed in terms of the probabilities α, β and γ as shown in formula (8) below:

    λ_tj = [ Σ_{m′,m : i_j(m′,m)=1} α_{t−1}(m′) · γ_t(m′, m) · β_t(m) ] / [ Σ_{m′,m : i_j(m′,m)=0} α_{t−1}(m′) · γ_t(m′, m) · β_t(m) ]   (8)
Meanwhile, formula (9) below holds true for t = 1, 2, ..., T:

    α_t(m) = Σ_{m′=0}^{M−1} α_{t−1}(m′) · γ_t(m′, m)   (9)
Similarly, formula (10) below holds true for t = 1, 2, ..., T:

    β_t(m) = Σ_{m′=0}^{M−1} β_{t+1}(m′) · γ_{t+1}(m, m′)   (10)

where β_T(0) = 1 and β_T(m) = 0 (m ≠ 0).
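The recursions (9) and (10) can be run directly once the branch probabilities γ_t(m′, m) are available. A minimal sketch on a toy two-state trellis follows; the γ table is made up purely for illustration, and the boundary conditions follow the text (coding starts and ends in state 0):

```python
# Forward recursion (9) and backward recursion (10) on a toy trellis.
# M = 2 states, T = 3 time slots; gamma[t][m_prev][m] is an
# illustrative, made-up branch-probability table, not from the patent.
M, T = 2, 3
gamma = [[[0.4, 0.6], [0.3, 0.7]] for _ in range(T + 1)]  # index 1..T used

# alpha_0(0) = 1, alpha_0(m) = 0 for m != 0 (coding starts at S_0 = 0)
alpha = [[0.0] * M for _ in range(T + 1)]
alpha[0][0] = 1.0
for t in range(1, T + 1):                      # formula (9)
    for m in range(M):
        alpha[t][m] = sum(alpha[t - 1][mp] * gamma[t][mp][m]
                          for mp in range(M))

# beta_T(0) = 1, beta_T(m) = 0 for m != 0 (coding ends at S_T = 0)
beta = [[0.0] * M for _ in range(T + 1)]
beta[T][0] = 1.0
for t in range(T - 1, -1, -1):                 # formula (10)
    for m in range(M):
        beta[t][m] = sum(beta[t + 1][mp] * gamma[t + 1][m][mp]
                         for mp in range(M))
```

Note how the forward pass sums over predecessor states m′ while the backward pass sums over successor states, mirroring the reversed index order γ_{t+1}(m, m′) in formula (10).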
Finally, formula (11) below holds true for γ_t:

    γ_t(m′, m) =
      { P_t(m | m′) · Pr{y_t | x(m′, m)} = Pr{i_t = i(m′, m)} · Pr{y_t | x(m′, m)}   ... when a transition occurs from m′ to m with input i
      { 0   ... when no transition occurs from m′ to m with input i   (11)
Thus, for soft-output decoding applying the BCJR algorithm, the decoder 203 determines the soft-output λ_t by passing through the steps shown in FIG. 3, utilizing the above relationships.
More specifically, in Step S201, the decoder 203 computes the probabilities α_t(m) and γ_t(m′, m), using formulas (9) and (11) above, each time it receives y_t.
Then, in Step S202, after receiving the entire sequence Y_1^T, the decoder 203 computes the probability β_t(m) of state m for all values of time t, using formula (10) above.
Thereafter, in Step S203, the decoder 203 computes the soft-output λ_t at each time t by substituting the values obtained in Steps S201 and S202 for the probabilities α_t, β_t and γ_t in formula (8) above.
With the above-described processing steps, the decoder 203 can carry out soft-output decoding by applying the BCJR algorithm.
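Steps S201 through S203 can be sketched end to end. The following toy implementation is a sketch under assumed inputs (a two-state trellis, one information bit per time slot, and a made-up branch-probability function standing in for formula (11)); it follows formulas (8) through (10):

```python
# End-to-end sketch of Steps S201-S203: forward pass (alpha),
# backward pass (beta), then soft-outputs lambda_t via formula (8).
# The trellis and branch probabilities are illustrative assumptions.
M, T = 2, 3

def gamma(t, mp, m):
    """Return (branch probability, input bit i(mp, m)) for a toy code
    where input 0 keeps the state and input 1 flips it. The weighting
    0.45/0.55 stands in for a received-value likelihood (made up)."""
    prob = 0.45 if mp == m else 0.55
    bit = 0 if mp == m else 1
    return prob, bit

def bcjr_soft_outputs():
    # Step S201: forward recursion, formula (9); alpha_0(0) = 1
    alpha = [[0.0] * M for _ in range(T + 1)]
    alpha[0][0] = 1.0
    for t in range(1, T + 1):
        for m in range(M):
            alpha[t][m] = sum(alpha[t - 1][mp] * gamma(t, mp, m)[0]
                              for mp in range(M))
    # Step S202: backward recursion, formula (10); beta_T(0) = 1
    beta = [[0.0] * M for _ in range(T + 1)]
    beta[T][0] = 1.0
    for t in range(T - 1, -1, -1):
        for m in range(M):
            beta[t][m] = sum(beta[t + 1][mp] * gamma(t + 1, m, mp)[0]
                             for mp in range(M))
    # Step S203: soft-output as a likelihood ratio, formula (8)
    lam = []
    for t in range(1, T + 1):
        num = den = 0.0
        for mp in range(M):
            for m in range(M):
                prob, bit = gamma(t, mp, m)
                term = alpha[t - 1][mp] * prob * beta[t][m]
                if bit == 1:
                    num += term
                else:
                    den += term
        lam.append(num / den)
    return lam
```

A λ_t greater than 1 indicates that bit 1 is the more likely input at time t; in a practical decoder the ratio is usually carried in the logarithmic domain to avoid underflow over long blocks.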
However, the BCJR algorithm is accompanied by the problem that it involves a large volume of
Inventors: Miyauchi Toshiyuki; Yamamoto Kouhei