Method and apparatus for efficiently reading and storing...

Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction


Details

Type: Reexamination Certificate
Current US Class: C375S341000
Status: active
Patent number: 06757864

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention generally relates to applications of the Viterbi algorithm. More particularly, the present invention relates to a novel method and apparatus for storing and retrieving state metrics in order to enhance the performance of high-rate Add-Compare-Select (ACS) butterfly operations in Viterbi implementations.
2. Description of Related Art
The Viterbi algorithm was first introduced in 1967 as a method for decoding convolutionally encoded signals. Since its introduction, the algorithm has gained wide acceptance in the fields of data communications, data recording, and digital signal processing. The algorithm has been used successfully in a variety of digital estimation applications, including the reduction of recording errors in storage media, the removal of intersymbol interference, and the enhancement of character and text recognition.
As such, the Viterbi algorithm has become the foremost method for the error-correction decoding of convolutionally encoded data. For such applications, the Viterbi algorithm determines, based on a series of observations, the path with the smallest error metric that traverses a trellis typifying all possible encoder states. The sequence of states along this “shortest path” corresponds to the sequence most likely generated by the convolutional encoder.
FIG. 1A illustrates a typical convolutional encoder. This convolutional encoder 100 comprises an 8-bit tapped shift register 110 and a pair of exclusive OR-type summers 120 that transform a sequence of bits from an input bit stream U(D) 105 into a paired sequence 125 of output code symbols C0(D), C1(D). In particular, FIG. 1A demonstrates the example of a rate ½ code, which generates a set of two output code symbols C0(D), C1(D) 125 for each bit inputted from input bit stream U(D) 105. It is to be noted that the specific code rate and configuration of the convolutional encoder 100 shown are merely illustrative and in no way limit the operation or scope of the various embodiments of the invention. As such, different code rates, such as ⅓ or ¾, for example, may be used in conjunction with embodiments of the invention as described below.
Encoder 100 generates each output code symbol pair C0(D), C1(D) of sequence 125 by shifting and exclusive-OR summing the input bit stream U(D) 105 according to the particular shift-register configuration specified by generator code polynomials G0(D), G1(D). In this case, FIG. 1A depicts a configuration corresponding to the rate ½ generator code polynomial G0(D) = 1 ⊕ D^2 ⊕ D^4 ⊕ D^7. The coefficients of polynomial G0(D) are convolved with input bit stream U(D) 105 to generate output convolutional code symbol C0(D) of sequence 125. Similarly, FIG. 1A also shows a configuration that corresponds to the rate ½ generator code polynomial G1(D) = 1 ⊕ D^2 ⊕ D^5, whose coefficients are convolved with input bit stream U(D) 105 to generate output convolutional code symbol C1(D) of sequence 125.
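By way of illustration only (this sketch is not part of the patent), such a shift-and-XOR encoder can be written in a few lines of C. The 9-bit window layout, the mask constants, and names such as encode_bit are choices made here for clarity, assuming the two tap sets given above.

```c
#include <stddef.h>
#include <stdio.h>

#define G0_MASK 0x152u  /* delays {0, 2, 4, 7} -> window bits {8, 6, 4, 1} */
#define G1_MASK 0x148u  /* delays {0, 2, 5}    -> window bits {8, 6, 3}    */

/* XOR (parity) of all bits set in v. */
static unsigned parity(unsigned v)
{
    unsigned p = 0;
    for (; v != 0; v >>= 1)
        p ^= v & 1u;
    return p;
}

/* Shift the new information bit u into the 9-bit window (entering as the
 * MSB, i.e., the window is right-shifted) and emit the code symbol pair. */
static void encode_bit(unsigned *window, unsigned u,
                       unsigned *c0, unsigned *c1)
{
    *window = ((*window >> 1) | (u << 8)) & 0x1FFu;
    *c0 = parity(*window & G0_MASK);  /* C0 = u XOR u[-2] XOR u[-4] XOR u[-7] */
    *c1 = parity(*window & G1_MASK);  /* C1 = u XOR u[-2] XOR u[-5]           */
}

int main(void)
{
    const unsigned input[] = { 1, 0, 1, 1, 0, 0, 1, 0 };  /* sample U(D) bits */
    unsigned window = 0, c0, c1;

    for (size_t i = 0; i < sizeof input / sizeof input[0]; i++) {
        encode_bit(&window, input[i], &c0, &c1);
        printf("u=%u -> C0=%u C1=%u\n", input[i], c0, c1);
    }
    return 0;
}
```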
The constraint length K of encoder 100 is one more than the number of delay elements in shift register 110. For encoder 100, for example, constraint length K equals 9. For each data bit of input bit stream U(D) 105 inputted into encoder 100, the output code symbol pair C0(D), C1(D) of sequence 125 may depend on the inputted bit as well as the previous K−1 input bits. Therefore, encoder 100 produces output code symbol pairs that are capable of spanning 2^(K−1) possible encoder states.
In a typical communication system, the output code symbol pairs C0(D), C1(D) of sequence 125 are subsequently modulated and transmitted over a noisy channel (not shown). A decoder eventually receives the noisy convolutionally encoded data stream and employs the Viterbi algorithm, which exploits the properties of convolutional codes to ultimately determine the input bit stream U(D) 105.
One advantage of convolutional codes is their highly repetitive structure, which provides for a symmetrical code tree. Such symmetry reduces the number of states that need to be evaluated in locating the most probable path. Moreover, in decoding such a symmetrical code, only the most probable local path leading into each of the 256 possible encoder states is of interest. All other paths may be discarded from further consideration, because the most probable global path through a state must necessarily include the most probable local path through that state. (Note that in some applications of the Viterbi algorithm, the decision as to which local path is most probable may be deferred until information relating to subsequent states is available.)
The Viterbi decoder relies on these code properties to function as a finite state machine having a limited set of state transitions. The decoder hypothesizes each of the 2^(K−1) possible encoder states and determines the probability that the encoder transitioned from each of those states to each of the next set of 2^(K−1) possible encoder states. In this case, the transition probability is based on observations which are obtained from the received noisy convolutionally encoded data stream.
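To make this state-transition structure concrete, the short C sketch below (illustrative only, not taken from the patent) enumerates the two transitions out of each of the 256 hypothesized source states for the K = 9 code, assuming the right-shift state update described in connection with FIG. 2 below.

```c
#include <stdio.h>

int main(void)
{
    /* For each of the 256 hypothesized 8-bit source states s, the new
     * information bit u is shifted in as the MSB of the target state
     * while the old state is right-shifted by one bit, so each source
     * state can reach exactly two target states. */
    for (unsigned s = 0; s < 256; s++)
        for (unsigned u = 0; u < 2; u++)
            printf("source %3u --u=%u--> target %3u\n",
                   s, u, (u << 7) | (s >> 1));
    return 0;
}
```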
The probability of each state transition is expressed by a quantity, referred to as a metric, which represents a distance (e.g., in code space) between that state transition and what was actually observed at that point in the input data stream. This distance may be expressed as, for example, a Hamming distance, a Euclidean distance, or a negative logarithm of a probability value, depending on the particular application. Clearly, the smaller the metric, the higher the probability of occurrence. There are two types of metrics: state metrics and branch metrics. The state metric represents the relative probability that the transmitted set of code symbols passed through a particular state. The branch metric represents the conditional probability that the transition from a particular source state to a particular target state was transmitted (assuming that the source state was correct).
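As a small illustration (not part of the patent), a hard-decision branch metric for a rate ½ code can be computed as the Hamming distance between the received symbol pair and a hypothesized pair; the function name and the hard-decision assumption are choices made here, and a soft-decision decoder would use a Euclidean distance or negative log-likelihood instead.

```c
#include <stdio.h>

/* Hard-decision branch metric: the Hamming distance between the received
 * symbol pair (r0, r1) and a hypothesized pair (h0, h1). The smaller the
 * metric, the more probable the corresponding transition. */
static unsigned branch_metric(unsigned r0, unsigned r1,
                              unsigned h0, unsigned h1)
{
    return (r0 ^ h0) + (r1 ^ h1);          /* 0, 1, or 2 */
}

int main(void)
{
    unsigned r0 = 1, r1 = 0;               /* received (hard) symbols */

    for (unsigned h = 0; h < 4; h++)       /* four hypothesized pairs */
        printf("H=(%u,%u) metric=%u\n",
               (h >> 1) & 1u, h & 1u,
               branch_metric(r0, r1, (h >> 1) & 1u, h & 1u));
    return 0;
}
```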
The Viterbi algorithm has been implemented efficiently by employing an Add-Compare-Select (ACS) unit 150, as illustrated in FIG. 1B. The ACS unit 150 calculates the target state metric values and also characterizes the relationships between the source and target states by virtue of ACS butterfly operations. FIG. 2 depicts a single ACS butterfly operation 155, which evaluates the only possible state transitions that could have occurred for two particular adjacent source states in encoder 100. This limitation is partly due to the fact that, at any given time, the state of encoder 100 is the encoder's previous state right-shifted by 1 bit. The next (right-shifted) information bit determines which transition is made from a source state and will appear as the most significant bit (MSB) of the target state. For a binary data stream, there are only two possible target states that a source state can transition to. Thus, as evidenced by FIG. 2, encoder 100 can only transition from source state “x0” to target state “0x” or “1x”, and from source state “x1” to target state “0x” or “1x”, depending on the value of the inputted data bit of bit stream U(D) 105. In this figure, and elsewhere, the notations “x0” and “x1” indicate that the least significant bit (LSB) of the source state is “0” or “1”, respectively, while the upper bits are represented by “x”; the notations “0x” and “1x” indicate that the MSB of the target state is “0” or “1”, respectively, while the lower bits are represented by “x”. The term “x” represents the same value (e.g., a 7-bit value) whether it appears in the number of a source state or of a target state.
FIG. 2 also reveals that each pair of transitions from the source states to the target states generates a hypothesized pair of code symbols H0(D), H1(D) or H̄0(D), H̄1(D). In fact, when the most likely transitions ar
