Error detection/correction and fault detection/recovery – Pulse or data error handling – Digital data error correction
Reexamination Certificate
2001-03-22
2004-06-01
Baker, Stephen M. (Department: 2133)
C714S757000
Reexamination Certificate
active
06745362
ABSTRACT:
The field of this invention is the coding of numeric data. More precisely, the invention relates to error correction codes. The invention is particularly, but not exclusively, applicable to the coding of source data organized into data blocks that are independent of each other and therefore need to be coded and decoded individually, and to the coding of data flows (convolutional coding).
Many coding techniques for correcting transmission errors are already known. In particular, refer to the documents by F. J. MacWilliams and N. J. A. Sloane [1], S. B. Wicker [2], and V. Pless et al [3] (all references mentioned in this application have been grouped in the appendix to simplify reading). Error correction codes are used to correct the transmission errors inherent to all communication channels. Thus, they are very frequently used in telecommunications, remote broadcasting and information storage, for example on laser disks and magnetic disks, etc.
For example, these errors may be caused by thermal noise in electronic components of the receiver, accidental or deliberate electromagnetic interference, or echoes and multipath propagation in the case of radio transmission (for example for radiotelephony systems such as GSM, digital radio broadcasting such as DAB or digital television such as DVB) or on an electrical power network (for example for telecommunications over the electrical network).
The first studies on error correction codes date from the 1940s. In his articles written in 1948, Claude Shannon [4] laid the foundations of the information theory on which digital communication systems are still (and always will be?) designed.
One of Shannon's results is his theorem concerning the limiting capacity of a communication channel, which is defined by the following well known formula:
C = B·log₂(1 + S/N)
where the capacity C of the channel is an information throughput expressed in bits/s, B is the passband of the channel in Hz, and S/N is the ratio of the signal and noise powers in the passband.
This theorem is expressed as follows: “There is a transmission capacity C (in bits per second) associated with every channel. There are error correction codes such that the information can be transmitted through a channel at a rate lower than its capacity C with an arbitrarily low bit error rate by using a sufficiently long error correction code”.
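As a numeric illustration (not part of the patent text), the capacity formula above can be evaluated directly. The 3 kHz bandwidth and 30 dB signal-to-noise ratio used below are illustrative values chosen here, not figures from the source:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 3 kHz channel with a 30 dB signal-to-noise ratio.
snr = 10 ** (30 / 10)                     # 30 dB -> linear power ratio of 1000
capacity = shannon_capacity(3000.0, snr)  # ≈ 29,901 bits/s
```

The result, on the order of 30 kbit/s, is roughly the regime in which late analog telephone-line modems operated.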
Unfortunately, Shannon's theorem is no more than a proof of existence. His publication was the starting point for a search for “good” codes that has now been going on for 50 years. The main codes that first attracted the attention of the telecommunications community were block codes for transmitting bit packets: Hamming codes, Golay codes, Reed-Muller codes, BCH codes, Reed-Solomon codes, etc.
The principle of block error correction codes was invented by R. W. Hamming in 1950: starting from k useful information bits, (n−k) redundancy bits are calculated using modulo 2 additions of k information bits, therefore the total length of each block, also called a code word, is n bits. On reception, if there are any errors present in the received n-bit word, the presence of the (n−k) redundancy bits adjacent to the k useful information bits can be used to correct some of the errors present.
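Hamming's principle can be sketched with the classic (7,4) code, where k=4 and n=7. The bit ordering below (parity bits at positions 1, 2 and 4, so that the syndrome directly indexes the error position) is one common convention, assumed here rather than taken from the text:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a (7,4) Hamming codeword.
    Positions (1-indexed): p1 p2 d1 p3 d2 d3 d4, parities at 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # modulo-2 sums of the information bits
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(r):
    """Recompute the parities; the syndrome, read as a binary number,
    is the 1-indexed position of a single-bit error (0 = no error)."""
    r = list(r)
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]   # checks positions 1, 3, 5, 7
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]   # checks positions 2, 3, 6, 7
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]   # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        r[pos - 1] ^= 1              # flip the erroneous bit
    return [r[2], r[4], r[5], r[6]]  # extract the data bits
```

Any single error in the 7 received bits is corrected, at the cost of n−k = 3 redundancy bits.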
If the signal-to-noise power ratio exceeds a particular value, then the cost of the information flow occupied by code redundancy bits is more than compensated by the reduction, or possibly almost complete elimination, of the bit error rate (BER) after decoding. This gain in performance, for a fixed BER, is called the coding gain and is given in decibels (dB). A coding gain of 3 dB means that the power of the signal transmitted with coding can be halved compared with the power needed for the same BER without coding (3 dB ≈ 10·log₁₀(2)).
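The decibel arithmetic behind the “3 dB” figure is a one-line computation; this small helper is an illustration added here, not part of the patent:

```python
import math

def coding_gain_db(power_ratio: float) -> float:
    """Express a transmit-power ratio as a gain in decibels."""
    return 10.0 * math.log10(power_ratio)

# Halving the required transmit power for the same BER:
gain = coding_gain_db(2.0)   # ≈ 3.01 dB, the "3 dB" of the text
```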
Another large and more recent coding family is the family of convolutional codes introduced by P. Elias [5] in 1955. These codes transform an infinite series of information symbols into several other infinite series of symbols. They were invented to transmit continuous information flows. A. J. Viterbi [6] (in 1967) and G. D. Forney [7] (in 1973) demonstrated the possibility of efficiently decoding “small” convolutional codes with the “Viterbi” algorithm, an algorithm based on dynamic programming (see Bellman [8] (1957)).
Block (error correction) codes and convolutional codes may be processed using the same theoretical tools and the same decoding methods. Both can be represented by special graphs called “trellises”, from which their weight distribution can be calculated and which can be used for decoding with the Viterbi algorithm (see Forney [7] and McEliece [9]).
Each code word is represented by a different path in the trellis, and the best path, in other words the code word closest to the received word, is found by decoding with the Viterbi algorithm. But the complexity of the decoding increases exponentially: as 2^(n−k) for a block code, and as 2^ν for a convolutional code (where ν is the size of the memory of the convolutional coder). The dilemma is that as the length and power of a code increase, it corrects more errors, but it also becomes more expensive in decoding time and equipment.
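Trellis decoding as described above can be sketched in a few lines. The code below is an illustrative hard-decision Viterbi decoder for the common rate-1/2, memory-2 convolutional code with octal generators 7 and 5; that particular code is an assumption made here for the example, not one named by the text:

```python
# States encode the last two input bits; generators 7 (0b111) and 5 (0b101).
G1, G2 = 0b111, 0b101

def conv_encode(bits):
    """Rate-1/2 convolutional encoder, memory 2 (the classic (7,5) code)."""
    state, out = 0, []
    for u in bits:
        reg = (u << 2) | state
        out += [bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding: keep, for each trellis state, the
    path whose re-encoded output is closest in Hamming distance to
    `received`; the survivor with the best final metric wins."""
    INF = float("inf")
    metrics = [0.0] + [INF] * 3          # encoder starts in state 0
    paths = [[] for _ in range(4)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_m, new_p = [INF] * 4, [None] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for u in (0, 1):
                reg = (u << 2) | s
                o = [bin(reg & G1).count("1") & 1, bin(reg & G2).count("1") & 1]
                m = metrics[s] + (o[0] != r[0]) + (o[1] != r[1])
                ns = reg >> 1
                if m < new_m[ns]:
                    new_m[ns], new_p[ns] = m, paths[s] + [u]
        metrics, paths = new_m, new_p
    return paths[min(range(4), key=lambda s: metrics[s])]
```

The complexity per decoded bit is proportional to the 2^ν = 4 states, matching the exponential cost discussed above; with free distance 5, this code corrects up to two channel errors per constraint span.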
Recently, in 1993, C. Berrou and A. Glavieux [10] presented a new coding scheme usually called the “turbo-code”. There are four main concepts in turbo-codes:
1) Use recursive convolutional coding, in other words with an infinite impulse response (and not a finite impulse response as is conventionally used).
2) Code the useful information bits once in their initial order, then code the same useful data a second time in a different, permuted order. The useful bits are transmitted only once, and the coders transmit only their redundancies. The permutation, also called interleaving, is “magic” in that the code is able to correct more errors as the interleaver becomes larger. Interleaving decorrelates the errors created by the transmission channel.
3) Use the soft decoding techniques introduced by Bahl et al (the BCJR algorithm) [11], G. Battail [12], or Hagenauer (SOVA, the “Soft-Output Viterbi Algorithm”) [13], etc.
4) By definition, let the output Li of each elementary soft decoder i (receiving “information” Li(0) in iteration i) be the sum of the received initial “information” L0 and the extracted or extrinsic information Wi, such that Li = L0 + Wi. In the next iteration (i+1), only part of the extrinsic information is reinjected, such that Li+1(0) = L0 + ai+1·Wi, where the coefficients ai vary from 0 in the first iteration and can reach 1.0 in the last iteration. These coefficients ai are optimized to minimize the BER at a given signal-to-noise ratio. Strictly speaking, the L and W quantities are logarithms of probability ratios and not information quantities.
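The damped feedback of point 4), Li+1(0) = L0 + ai+1·Wi, can be sketched as a loop. The stand-in “decoder” below is a deliberately trivial placeholder that merely pushes each value toward its sign; a real turbo decoder would compute Wi with a soft-output decoder such as BCJR or SOVA, and the LLR values and ai schedule here are made up for illustration:

```python
def iterate(L0, extrinsic_of, coeffs):
    """Damped extrinsic feedback: each iteration recombines the fresh
    channel values L0 with a fraction a_i of the extrinsic info W_i,
    i.e. L_{i+1}(0) = L0 + a_{i+1} * W_i."""
    L_in = list(L0)
    for a in coeffs:
        W = extrinsic_of(L_in)                        # W_i from decoder i
        L_in = [l0 + a * w for l0, w in zip(L0, W)]   # reinject damped W_i
    return L_in

# Placeholder decoder: nudge each log-likelihood ratio toward its sign.
W = lambda L: [2.0 if l > 0 else -2.0 for l in L]
out = iterate([0.5, -1.2, 0.3], W, [0.3, 0.7, 1.0])  # a_i rises toward 1.0
```

Note that each iteration restarts from L0, so only the extrinsic part is scaled, which is exactly what the formula in point 4) prescribes.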
Berrou and Glavieux's first turbo-codes were convolutional codes based on a large interleaving matrix (256×256 = 65,536) with performance close to the Shannon limit at low signal-to-noise ratios. But some of the most recent telecommunication and radio broadcasting systems require small information packets to be transmitted (for example radio back scattering, mobile telephone, cordless ATM, Internet, etc.).
Furthermore, the disadvantage of large interleavers is an incompressible processing delay, which can be very annoying in a real-time service such as a telephone conversation. This is why research is continuing to find good block “turbo-codes”. In particular, R. Pyndiah et al [14] have adapted the iterative decoding concept to Elias product codes [5] for small block codes (n<1024). And recently, in 1997, Berrou et al [17] adapted their turbo-codes (by closing the trellis) to the construction of high-performance block codes of this order of size (n<2048), called FOTC (Fram
Carlac'h Jean-Claude
Vervoux Cyril
Baker Stephen M.
France Telecom
Merchant & Gould P.C.