Efficient iterative decoding

Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction

Reexamination Certificate

Details

US Classification: C714S776000, C714S786000
Type: Reexamination Certificate
Status: active
Patent Number: 06292918

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates generally to iterative decoding, and specifically to fast iterative decoding of multiple-component codes.
BACKGROUND OF THE INVENTION
Transmission of digital data is inherently prone to interference, which may introduce errors into the transmitted data. Error detection schemes have been suggested to determine as reliably as possible whether errors have been introduced into the transmitted data. For example, it is common to transmit the data in packets and to add to each packet a CRC (cyclic redundancy check) field, for example 16 bits long, which carries a checksum of the data in the packet. When a receiver receives the data, it calculates the same checksum on the received data and verifies whether the result of its calculation matches the checksum in the CRC field.
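As a rough illustration of this checksum mechanism (a sketch, not text from the patent; the CRC-16/CCITT polynomial and the Python helper names below are assumptions chosen for the example), the sender appends a 16-bit checksum to each packet and the receiver recomputes it over the received data:

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bit-by-bit CRC-16 (CCITT polynomial) -- one of many possible checksums."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def make_packet(payload: bytes) -> bytes:
    """Sender side: append the 16-bit checksum to the payload."""
    return payload + crc16_ccitt(payload).to_bytes(2, "big")


def check_packet(packet: bytes) -> bool:
    """Receiver side: recompute the checksum over the payload and compare."""
    payload, received = packet[:-2], int.from_bytes(packet[-2:], "big")
    return crc16_ccitt(payload) == received


packet = make_packet(b"hello, channel")
assert check_packet(packet)                          # clean packet passes
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]   # flip one bit
assert not check_packet(corrupted)                   # the error is detected
```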
When the transmitted data is not used on-line, it is possible to request re-transmission of erroneous data when errors are detected. However, when the transmission is performed on-line, such as over telephone lines, in cellular phones, or in remote video systems, it is not possible to request re-transmission.
Convolution codes have been introduced to allow receivers of digital data to correctly determine the transmitted data even when errors may have occurred during transmission. Convolution codes introduce redundancy into the data to be transmitted and pack it into packets in which the value of each bit depends on earlier bits in the sequence. Thus, when a few errors occur, the receiver can still deduce the original data by tracing back possible sequences in the received data.
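A minimal sketch of the idea, assuming a standard rate-1/2, constraint-length-3 encoder (the generator polynomials and helper names are illustrative, not the patent's): every pair of output bits is a parity over the current input bit and the two previous bits held in a shift register, which is what lets a decoder trace back possible sequences.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder with constraint length 3.

    Each input bit produces two output bits, each one a parity over the
    current bit and the two previous bits, so a decoder (e.g. Viterbi)
    can later trace back the possible state sequences to correct errors.
    """
    state = [0, 0]                          # the two previous input bits
    out = []
    for b in bits:
        window = [b] + state                # current bit plus shift register
        out.append(sum(w * g for w, g in zip(window, g1)) % 2)
        out.append(sum(w * g for w, g in zip(window, g2)) % 2)
        state = [b, state[0]]               # shift the register by one bit
    return out


print(conv_encode([1, 0, 1, 1]))            # 8 coded bits for 4 data bits
```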
To further improve the performance of a transmission channel, some coding schemes include interleavers, which mix up the order of the bits in the packet during coding. Thus, when interference destroys a few adjacent bits during transmission, the effect of the interference is spread out over the entire original packet and can more readily be overcome by the decoding process. Other improvements may include multiple-component codes which include coding the packet more than once in parallel or in series. For example, U.S. Patent No. 5,446,747, which is incorporated herein by reference, describes an error correction method using at least two convolutional codings in parallel. Such parallel encoding is known in the art as “Turbo coding.”
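For illustration only, a simple block interleaver (Turbo coders typically use pseudo-random permutations; the block form is just the easiest to show): bits are written into a matrix row by row and read out column by column, so a burst of adjacent channel errors is scattered across the packet after de-interleaving.

```python
def block_interleave(bits, rows, cols):
    """Write the bits row by row, read them out column by column."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]


def block_deinterleave(bits, rows, cols):
    """Inverse permutation: put each bit back at its original position."""
    out = [None] * (rows * cols)
    positions = [r * cols + c for c in range(cols) for r in range(rows)]
    for value, pos in zip(bits, positions):
        out[pos] = value
    return out


packet = list(range(12))                      # stand-in for 12 coded bits
tx = block_interleave(packet, rows=3, cols=4)
tx[4:7] = ["X", "X", "X"]                     # a burst error hits 3 adjacent bits
rx = block_deinterleave(tx, rows=3, cols=4)
print(rx)                                     # the "X" errors are now spread apart
```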
For multiple-component codes, optimal decoding is often a very complex task and may require long periods of time, not usually available for on-line decoding. In order to overcome this problem, iterative decoding techniques have been developed. Rather than determining immediately whether received bits are zero or one, the receiver assigns each bit a value on a multi-level scale representative of the probability that the bit is one. A common scale, referred to as LLR (log-likelihood ratio) probabilities, represents each bit by an integer in the range from −32 to 31. The value 31 signifies that the transmitted bit was a zero with very high probability, and the value −32 signifies that the transmitted bit was a one with very high probability. A value of zero indicates that the value is indeterminate.
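As a hedged illustration of such a scale (the scaling factor and mapping details are assumptions for the example, not taken from the patent): a log-likelihood ratio can be quantized to a 6-bit signed integer in the range from −32 to 31, with large positive values meaning "almost certainly zero", large negative values "almost certainly one", and zero meaning indeterminate.

```python
import math


def prob_zero_to_soft(p_zero: float, scale: float = 8.0) -> int:
    """Map P(bit == 0) to a 6-bit soft value in [-32, 31].

    The log-likelihood ratio log(P(0)/P(1)) is scaled and clipped; the
    scale factor 8.0 is an arbitrary choice for illustration.
    """
    p_zero = min(max(p_zero, 1e-12), 1 - 1e-12)   # avoid log(0)
    llr = math.log(p_zero / (1.0 - p_zero))
    return max(-32, min(31, round(scale * llr)))


print(prob_zero_to_soft(0.999))   # strongly "zero"  ->  31
print(prob_zero_to_soft(0.001))   # strongly "one"   -> -32
print(prob_zero_to_soft(0.5))     # indeterminate    ->   0
```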
Data represented on the multi-level scale is referred to as “soft data,” and iterative decoding is usually soft-in/soft-out, i.e., the decoding process receives a sequence of inputs corresponding to probabilities for the bit values and provides as output corrected probabilities, taking into account constraints of the code. Generally, a decoder which performs iterative decoding uses soft data from former iterations to decode the soft data read by the receiver. A method of iterative decoding is described, for example, in U.S. Patent No. 5,563,897, which is incorporated herein by reference.
During iterative decoding of multiple-component codes, the decoder uses results from decoding of one code to improve the decoding of the second code. When parallel encoders are used, as in Turbo coding, two corresponding decoders may conveniently be used in parallel for this purpose.
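Only to make this exchange of soft information concrete (the constituent decoders are abstracted away as placeholders; a real system would use MAP/BCJR or similar soft-in/soft-out decoders), the sketch below shows two decoders passing extrinsic information to each other through the interleaver on every iteration:

```python
def siso_decode(channel_llrs, a_priori_llrs):
    """Placeholder soft-in/soft-out decoder.

    A real constituent decoder would enforce the code constraints; here we
    only illustrate the interface: soft values in, new extrinsic values out.
    """
    return [0.5 * (c + a) for c, a in zip(channel_llrs, a_priori_llrs)]


def turbo_iterations(channel_llrs, interleave, deinterleave, n_iter=6):
    """One decoder per constituent code, exchanging extrinsic information."""
    extrinsic_2 = [0.0] * len(channel_llrs)        # nothing known before iteration 1
    for _ in range(n_iter):
        # Decoder 1 works on the natural bit order.
        extrinsic_1 = siso_decode(channel_llrs, extrinsic_2)
        # Decoder 2 works on the interleaved order, fed by decoder 1's output.
        extr_2 = siso_decode(interleave(channel_llrs), interleave(extrinsic_1))
        extrinsic_2 = deinterleave(extr_2)
    # Final soft output: channel values plus the last extrinsic information.
    return [c + e for c, e in zip(channel_llrs, extrinsic_2)]


perm = [2, 0, 3, 1]                                        # toy 4-bit interleaver
interleave = lambda x: [x[i] for i in perm]
deinterleave = lambda y: [y[perm.index(i)] for i in range(len(perm))]

received = [5.0, -3.0, 0.5, -7.0]                          # soft values from the channel
print(turbo_iterations(received, interleave, deinterleave))
```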
The iterative decoding is carried out for a plurality of iterations until it is believed that the soft data closely represents the transmitted data. Those bits whose soft value indicates that they are more likely to be zero (for example, between 0 and 31 on the scale described above) are assigned binary zero, and the rest of the bits are assigned binary one.
Generally, the iterative process is repeated a predetermined number of times. According to “An Introduction to Turbo Codes,” by Matthew C. Valenti, which can be found at lamarr.mprg.ee.vt.edu/documents/turbo.pdf and is incorporated herein by reference, the predetermined number of iterations is about 18. However, that article further states that in many cases as few as 6 iterations can provide satisfactory performance. “Iterative Decoding of Binary Block Codes,” by Joachim Hagenauer, Elke Offer and Lutz Papke, IEEE Transactions on Information Theory, Vol. 42, No. 2, pp. 429-445 (March 1996), which is incorporated herein by reference, suggests using a cross-entropy criterion to determine, individually for each packet, when to stop the iterative decoding process. Thus, the calculation power of a decoder may be used more efficiently than when all packets are decoded using the same number of iterations. However, the cross-entropy criterion is itself very complex to compute, which substantially reduces the efficiency gained by applying a variable number of iterations.
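A sketch of the two stopping strategies mentioned above, i.e. a fixed iteration budget versus a per-packet early exit. The convergence test used here (the mean squared change in the implied bit probabilities between successive iterations) is a simplified stand-in for the cross-entropy criterion of Hagenauer et al., not a reproduction of it:

```python
import math


def decode_with_early_stop(one_iteration, soft_in, max_iter=18, threshold=1e-4):
    """Run iterative decoding, stopping early once successive soft outputs agree.

    one_iteration: a callable performing a single decoding iteration on the
    soft values.  The convergence measure is an illustrative simplification.
    """
    prev = soft_in
    for i in range(1, max_iter + 1):
        cur = one_iteration(prev)
        change = sum(
            (1.0 / (1.0 + math.exp(-c)) - 1.0 / (1.0 + math.exp(-p))) ** 2
            for c, p in zip(cur, prev)
        ) / len(cur)
        if change < threshold:
            return cur, i            # converged early: fewer iterations spent
        prev = cur
    return prev, max_iter            # fall back to the fixed iteration budget


# Toy usage: an "iteration" that pushes every soft value halfway towards +/-10.
def toy_iteration(soft):
    return [s + 0.5 * ((10.0 if s >= 0 else -10.0) - s) for s in soft]


decoded, used = decode_with_early_stop(toy_iteration, [1.0, -2.0, 0.3, -4.0])
print(used, decoded)                 # typically far fewer than 18 iterations
```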
In one commonly-used multiple-component coding scheme, the packet is first encoded by a first “outer” coding scheme. Thereafter, it is interleaved and is then encoded by a second “inner” coding scheme. During decoding, the inner code is first decoded, the result is de-interleaved, and then the outer code is decoded. The results of decoding the outer code are thereafter used in a second iteration of decoding the inner code to improve its results. This process is continued iteratively until the coded packet is satisfactorily decoded.
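To make the serial (inner/outer) decoding order explicit, a hypothetical sketch of the loop; the two constituent decoders and the interleaver functions are placeholders supplied by the caller:

```python
def serial_iterative_decode(channel_soft, decode_inner, decode_outer,
                            deinterleave, interleave, n_iter=6):
    """Iterate: inner decode -> de-interleave -> outer decode -> interleave back.

    decode_inner and decode_outer stand in for the two constituent
    soft-in/soft-out decoders.
    """
    feedback = [0.0] * len(channel_soft)       # no prior information at first
    for _ in range(n_iter):
        inner_out = decode_inner(channel_soft, feedback)
        outer_out = decode_outer(deinterleave(inner_out))
        feedback = interleave(outer_out)       # reused by the inner decoder next time
    return deinterleave(feedback)              # final soft estimate of the packet
```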
The above-described decoding scheme is typically implemented by a single hardware decoder, which alternately decodes the inner and outer codes. However, when very fast decoding is needed and the inner and outer codes are substantially different, the computational load is generally beyond the capability of a single decoder of conventional design. It has therefore been suggested to use a decoder including two processors, one for the inner code and one for the outer code. However, this leaves each of the processors idle half of the time, while it waits for results from the other processor.
SUMMARY OF THE INVENTION
It is an object of some aspects of the present invention to provide methods and apparatus for fast iterative decoding of codes based on two or more different convolutional encoding schemes.
It is another object of some aspects of the present invention to provide apparatus for efficient iterative decoding of convolution codes.
It is a further object of some aspects of the present invention to provide an efficient method for determining how many iterations are needed for reliable decoding of a packet.
In preferred embodiments of the present invention, the decoding time allotted to each code in a multi-code serial or parallel coding scheme is made substantially equal. A decoder including two processors receives two packets of data in sequence and decodes them simultaneously. While one packet is being decoded by the first processor, the second processor decodes the other packet. When both processors finish a single iteration, the packets are switched between the processors, and another iteration is performed. Thus, both processors are substantially constantly in use, and codes may be decoded twice as fast as in prior-art schemes of comparable hardware complexity. Preferably, both processors operate concurrently at least 50% of their operation time on any input packet.
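Purely as an illustration of this scheduling idea (the thread pool, function names and packet representation are illustrative assumptions, not the patent's implementation), the sketch below decodes two packets concurrently on two workers and swaps the packets between them after every iteration, so that neither worker waits idle for the other:

```python
from concurrent.futures import ThreadPoolExecutor


def decode_inner_iteration(packet_state):
    # Placeholder for one iteration of the inner-code decoder.
    return packet_state + ["inner"]


def decode_outer_iteration(packet_state):
    # Placeholder for one iteration of the outer-code decoder.
    return packet_state + ["outer"]


def decode_two_packets(packet_a, packet_b, n_iter=6):
    """Ping-pong scheduling: in every iteration one processor runs the inner
    decoder on one packet while the other runs the outer decoder on the other
    packet; the packets are then swapped, so both processors stay busy."""
    state_a, state_b = packet_a, packet_b
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(n_iter):
            fut_inner = pool.submit(decode_inner_iteration, state_a)
            fut_outer = pool.submit(decode_outer_iteration, state_b)
            # Swap: what just left the inner decoder goes to the outer one next.
            state_a, state_b = fut_outer.result(), fut_inner.result()
    return state_a, state_b


a, b = decode_two_packets(["pkt A"], ["pkt B"], n_iter=2)
print(a)   # each packet's history alternates between inner and outer decoding
print(b)
```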
In some preferred embodiments of the present invention, the two packets are decoded independently of each other, so that t
