Error detection/correction and fault detection/recovery – Pulse or data error handling – Data formatting to improve error detection correction...
Reexamination Certificate
2002-05-17
2004-01-13
Decady, Albert (Department: 2133)
C714S786000
active
06678843
ABSTRACT:
TECHNICAL FIELD OF THE INVENTION
The invention relates to high-speed, low-power channel coding in communication systems, and to coders and decoders that provide such channel coding and decoding.
BACKGROUND OF THE INVENTION
In digital communication systems, reliable transmission is achieved by means of channel coding, a class of Forward Error Correction (FEC) techniques. Coding the information means adding redundancy to the bit stream at the transmitter side, so that it can be properly reproduced at the receiver side.
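As a minimal illustration of the principle (and not the coding scheme of the invention), the following Python sketch adds redundancy with a simple rate-1/3 repetition code: every information bit is transmitted three times, and the receiver reproduces the bit by majority vote, so a single corrupted copy is corrected.

# Minimal illustration only: a rate-1/3 repetition code.
def repetition_encode(bits, n=3):
    """Repeat every information bit n times (code rate 1/n)."""
    return [b for b in bits for _ in range(n)]

def repetition_decode(received, n=3):
    """Majority-vote each group of n received bits."""
    return [int(sum(received[i:i + n]) > n // 2)
            for i in range(0, len(received), n)]

info = [1, 0, 1, 1]
tx = repetition_encode(info)           # 12 coded bits leave the transmitter
tx[4] ^= 1                             # the channel flips one coded bit
assert repetition_decode(tx) == info   # the receiver still reproduces the information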
Ever more (wireless) networks and services are emerging. Therefore, (wireless) communication systems should strive to utilize the spectrum capacity to its maximum. The theoretical limits of the achievable capacity on a communication channel were set by Shannon's fundamental work over fifty years ago, as described in C. Shannon, “A mathematical theory of communications”, Bell Sys. Tech. Journal, vol. 27, October 1948. Decades of innovations in digital communication, signal processing and very large scale integration (VLSI) were needed to bring the efficiency of practical systems near the theoretical bounds. Only recently, a new FEC coding scheme, turbo coding, was conceived, which approaches Shannon's limit much more closely than any previously known FEC scheme. This coding scheme is described in C. Berrou, A. Glavieux, P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: turbo-codes”, Proc. IEEE ICC, pp. 1064-1070, May 1993. In this technique, large coding gains (meaning less transmission power for the same bit error rate (BER)) are obtained using two or more constituent codes working on different versions of the information to be transmitted. Decoding is done in an iterative way, using a different decoder for each constituent encoder. The information provided by one decoder is processed iteratively by the other decoder until a certain degree of refinement is achieved. A general turbo coding/decoding scheme for a Parallel Concatenated Convolutional Code (PCCC) is depicted in FIG. 1. The information bitstream I to be transmitted is encoded by a first encoder C1 and a second encoder C2, e.g. in a pipeline. The second encoder C2 works on an interleaved version of the information bitstream I, produced by an interleaver Π. The interleaver Π randomises the information bitstream I to decorrelate the inputs of the two encoders C1, C2. Three bitstreams are transmitted: the information bitstream itself Xk (called the systematic sequence), the coded sequence Yk1 and the coded sequence Yk2 (both called parity sequences).

The decoding process begins by receiving partial information from the channel (Xk and Yk1) and passing it to a first decoder D1. The rest of the information, parity 2 (Yk2), goes to a second decoder D2, where it waits for the remaining information to catch up. Decoding is preferably based on, e.g., a Maximum A Posteriori (MAP) decoding algorithm or a Soft Output Viterbi Algorithm (SOVA). While the second decoder D2 is waiting, the first decoder D1 makes an estimate of the transmitted information, interleaves it in a first interleaver Π1 to match the format of parity 2, and sends it to the second decoder D2. The second decoder D2 takes information from both the first decoder D1 and the channel and re-estimates the information. This second estimate is fed back, over a second interleaving stage, namely the deinterleaver Π1^-1, to the first decoder D1, where the process starts again. The main idea behind iterative decoding is that the decoded data is continuously refined. Part of the resulting decoded data (called extrinsic information) produced by each decoder D1 resp. D2 in each iteration is then fed back to the other decoder D2 resp. D1 to be used in the next iteration step. Interleaving/deinterleaving stages Π1, Π1^-1 between the two decoders D1, D2 are incorporated to adapt the sequences to the order defined in the encoding step. This cycle of iterations continues until certain conditions are met, for example until a certain number of iterations has been performed. The resulting extrinsic information is then no longer relevant and the process may stop. The result is the decoded information bitstream U.
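To make the encoder side of FIG. 1 concrete, the following Python sketch implements a toy parallel concatenation under stated assumptions: the constituent encoders C1 and C2 are 4-state recursive systematic convolutional (RSC) codes with octal generators (1, 5/7), and the interleaver Π is a random permutation; these choices, the block length and the function names are illustrative only and are not taken from the invention. The iterative MAP/SOVA decoder with extrinsic-information exchange through Π1 and Π1^-1 is not modelled here.

# Structural sketch of a PCCC encoder as in FIG. 1 (assumed toy parameters).
import random

def rsc_parity(bits, state=(0, 0)):
    """Rate-1/2 recursive systematic convolutional (RSC) encoder with octal
    generators (1, 5/7); only the parity output is returned, since the
    systematic output equals the input."""
    s1, s2 = state
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2           # feedback taps (g0 = 7 -> 1 + D + D^2)
        parity.append(a ^ s2)     # feedforward taps (g1 = 5 -> 1 + D^2)
        s1, s2 = a, s1            # shift the registers
    return parity

def pccc_encode(info, interleaver):
    """Parallel concatenation: C1 encodes the natural order, C2 encodes the
    interleaved order; three streams Xk, Yk1, Yk2 are transmitted."""
    x = list(info)                                    # systematic sequence Xk
    y1 = rsc_parity(info)                             # parity of C1 -> Yk1
    y2 = rsc_parity([info[i] for i in interleaver])   # parity of C2 -> Yk2
    return x, y1, y2

K = 16                                    # toy block length
pi = random.sample(range(K), K)           # interleaver Π as a random permutation
info = [random.randint(0, 1) for _ in range(K)]
x, y1, y2 = pccc_encode(info, pi)

The structural point of the sketch is that C1 sees the information bits in their natural order while C2 sees the permuted order, which is what decorrelates the two parity sequences.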
Turbo codes have rapidly received a lot of attention and have been a focus of research since their first publication. Indeed, a gain of 3 dB over conventional coding schemes can be translated into a doubling of battery time, or a gain of 20% in bandwidth efficiency. Given the value of these resources, the enormous interest in turbo coding is evident. As a consequence of their near-optimal performance, turbo coding schemes are now one of the main candidates for upcoming systems such as Universal Mobile Telecommunications Systems (UMTS), satellite UMTS and Digital Video Broadcasting (DVB), as described in 3rd Generation Partnership Project (3GPP), Technical Specification Group (TSG), Radio Access Network (RAN), Working Group 1, “Multiplexing and channel coding”, TS 25.222 V1.0.0 Technical Specification, 1999-04. The acceptance of turbo coding has been spectacular, as evidenced e.g. by the number of publications and theoretical developments presented at the 2nd International Symposium on Turbo Codes & Related Topics, September 2000, Brest, France. In contrast, the hardware implementation of turbo codes has followed this evolution only slowly. Speed, latency, and most of all power consumption are significant technical problems in implementing the turbo coding principles. Ideally, speeds on the order of 100 Mbps should be achieved in order to meet the ever-growing speed demands. High-speed data services require high coding gains, making concatenated coding with iterative decoding (turbo coding) highly suitable. The performance advantage of turbo coding, however, comes at the cost of increased digital processing complexity and decoding latency. The penalty in complexity (operations per bit) is typically an order of magnitude if the turbo coding scheme is implemented in a straightforward way. The latency bottleneck needs to be solved if high-speed, low-power turbo coders for real-time applications are envisaged. Current commercially available turbo coding solutions, such as those from Small World Communications, Payneham South, Australia, from sci-worx, Hannover, Germany, or from Soft DSP, Seoul, Korea, do not match the speed and power requirements imposed by current high-end communication systems.
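A rough back-of-envelope calculation (with assumed, round operation counts that are not taken from the text) shows why an order-of-magnitude complexity penalty is severe at such rates:

# Back-of-envelope only; the per-bit operation counts are assumptions.
conventional_ops_per_bit = 50                      # e.g. a stand-alone Viterbi decoder
turbo_ops_per_bit = 10 * conventional_ops_per_bit  # "order of magnitude" penalty
bit_rate = 100e6                                   # target throughput of ~100 Mbps

print(f"conventional: {conventional_ops_per_bit * bit_rate:.1e} operations/s")  # 5.0e+09
print(f"turbo:        {turbo_ops_per_bit * bit_rate:.1e} operations/s")         # 5.0e+10

Sustaining tens of billions of operations per second within the power budget of a mobile terminal is what motivates the architectural work discussed below.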
Recently, some components for high-speed turbo coders, appropriate for real-time wireless communication (i.e. with low power consumption and low latency), have been reported in the literature, such as e.g. in G. Masera, G. Piccinini, M. Ruo Roch, M. Zamboni, “VLSI architectures for Turbo codes”, IEEE Transactions on VLSI Systems, 7(3):369-378, September 1999, in J. Dielissen et al., “Power-Efficient Application-Specific VLIW Processor for Turbo decoding”, ISSCC 2001, San Francisco, February 2001, or in Hong, Waynem, and Stark, “Design and Implementation of a Low Complexity VLSI Turbo-Code Decoder Architecture for Low Energy Mobile Wireless Communications”, Proceedings of ISLPED 99, 1999. Almost all of the advanced turbo coders described in these recent publications use ‘overlapping sliding windows’ (OSW) in the decoding process to increase speed and decrease power consumption. Even better architectures for turbo decoding at high speed, low power and low latency have been reported in A. Giulietti, M. Sturm, F. Maessen, B. Gyselinckx, L. van der Perre, “A study on fast, low-power VLSI architectures for turbo codes”, International Microelectronics Symposium and Packaging, September 2000, as well as in U.S. patent application Ser. No. 09/507,545, entitled “Method and System Architectures for Turbo-Decoding”. While solutions for optimizing the decoding processes are available, no attractive results for speeding up the interleaving and de-interleaving operations have been proposed.
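The overlapping-sliding-window idea can be sketched with a few lines of index bookkeeping: the block of N trellis steps is split into windows of W steps, each extended by an acquisition region of L steps on either side in which the forward/backward state metrics are allowed to converge before any output is retained. W and L below are hypothetical values, and the sketch deliberately omits the MAP recursions themselves.

def osw_schedule(N, W=32, L=8):
    """Partition a block of N trellis steps into overlapping sliding windows.
    Each window retains W outputs but is decoded over an extended range so the
    state metrics can warm up inside the overlap; W and L are illustrative."""
    schedule = []
    for start in range(0, N, W):
        stop = min(start + W, N)
        schedule.append({
            "decode_from": max(start - L, 0),  # acquisition before the window
            "decode_to": min(stop + L, N),     # acquisition after the window
            "keep_from": start,                # outputs actually kept
            "keep_to": stop,
        })
    return schedule

# The windows are independent apart from the short overlaps, so they can be
# decoded in parallel or with bounded memory, which is where the speed and
# power gains of OSW architectures come from.
for window in osw_schedule(N=100):
    print(window)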
As discussed before, turbo decoders, despite their performance close to the channel
Bougard Bruno
Cosgul Gokhan
Derudder Veerle
Giese Jochen Uwe
Giulietti Alexandre
Chase Shelly A.
Interuniversitair Microelektronics Centrum (IMEC)
Knobbe Martens Olson & Bear LLP