Cache and caching method for convolutional decoders

Pulse or digital communications – Receivers – Particular pulse demodulator or detector

Reexamination Certificate


Details

Classification: C375S262000
Type: Reexamination Certificate
Status: active
Patent number: 06580767

ABSTRACT:

FIELD OF THE INVENTION
The invention relates generally to convolutional decoders and more particularly to convolutional decoders with forward and backward recursion, such as soft decision output decoders and methods.
BACKGROUND OF THE INVENTION
Soft decision output decoders, such as maximum a posteriori (MAP) decoders, are known for use with convolutional codes to decode symbols that represent transmitted information, such as voice, video, data or other information communicated over a wireless communication link. Convolutional decoders with forward and backward recursion are known and are sometimes used to decode turbo codes. Such decoders perform recursive decoding, which uses soft decision information from one pass of a MAP algorithm as input to the next pass of the MAP algorithm to obtain a best estimate of the transmitted information data as determined from the decoded symbols. As known in the art, a forward recursion decoder generally evaluates the received information and determines the probability that the encoder was in a particular state, working from the beginning of a sequence. A backward recursion decoder computes the probability that the encoder was in a given state, working from the end of the sequence. Hence, the computation of this probability for backward recursion decoders typically starts from the end of a frame. Such convolutional decoding may utilize seven to eight passes. One example of a MAP decoder is described, for example, in an article entitled “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes,” by C. Berrou et al., Proceedings of ICC '93, Geneva, Switzerland, pp. 1064-1070, May 1993, incorporated herein by reference.
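As an illustration of the forward and backward recursions described above, the following is a minimal sketch, in Python, of the alpha (forward) and beta (backward) passes of a MAP-style decoder over a generic trellis. The four-state trellis, the branch-metric array `gamma`, the assumed initial and final states, and the normalization are all illustrative assumptions and are not the method claimed in the patent.

```python
# Minimal sketch of the forward (alpha) and backward (beta) recursions of a
# MAP-style decoder.  The 4-state trellis and branch metrics are illustrative
# assumptions, not the structure claimed in the patent.

NUM_STATES = 4  # assumed number of encoder states


def forward_recursion(gamma):
    """alpha[k][s]: probability of being in state s after k steps, computed from the frame start."""
    num_steps = len(gamma)
    alpha = [[0.0] * NUM_STATES for _ in range(num_steps + 1)]
    alpha[0][0] = 1.0  # assume the encoder starts in state 0
    for k in range(num_steps):
        for s_next in range(NUM_STATES):
            alpha[k + 1][s_next] = sum(
                alpha[k][s] * gamma[k][s][s_next] for s in range(NUM_STATES)
            )
        total = sum(alpha[k + 1]) or 1.0
        alpha[k + 1] = [a / total for a in alpha[k + 1]]  # normalize to avoid underflow
    return alpha


def backward_recursion(gamma):
    """beta[k][s]: probability of the remaining frame given state s at step k, computed from the frame end."""
    num_steps = len(gamma)
    beta = [[0.0] * NUM_STATES for _ in range(num_steps + 1)]
    beta[num_steps] = [1.0 / NUM_STATES] * NUM_STATES  # assume an unknown ending state
    for k in range(num_steps - 1, -1, -1):
        for s in range(NUM_STATES):
            beta[k][s] = sum(
                gamma[k][s][s_next] * beta[k + 1][s_next] for s_next in range(NUM_STATES)
            )
        total = sum(beta[k]) or 1.0
        beta[k] = [b / total for b in beta[k]]  # normalize to avoid underflow
    return beta
```

Here `gamma[k][s][s_next]` stands for the branch metric of the transition from state `s` to state `s_next` at trellis step `k`; a full MAP decoder would combine `alpha`, `gamma`, and `beta` at each step to produce the soft outputs mentioned above.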
A MAP decoder may be made up of two or more constituent decoders. For example, a MAP decoder may be made up of three constituent decoders, where one processes data forward in time and two process data backward in time at staggered offsets. The frame of data to be decoded is divided into blocks of length L. All three constituent decoders run simultaneously, so the input data associated with three different blocks of the frame are needed at any given time. For example, with a turbo decoder having a forward recursion decoder and two backward recursion decoders, there is a need to buffer a frame to hold all symbols for all decoders that are running at the same time. Data must be fed to each of the decoders; each typically requires different symbols of the frame at any given time, yet all of them require data simultaneously for efficient parallel processing.
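The short sketch below illustrates one possible staggering of the three constituent decoders over blocks of length L. The block length, frame size, and the one- and two-block offsets are assumptions made for illustration, not the schedule claimed in the patent; the point is only that the three decoders need symbol data from three different blocks of the frame at the same time.

```python
# Illustrative schedule for one forward and two backward constituent decoders.
# Block length, frame size, and offsets are assumed values for this sketch.

L = 256            # assumed block length in symbols
NUM_BLOCKS = 12    # assumed frame length of NUM_BLOCKS * L symbols


def blocks_needed(period):
    """Return the block index each constituent decoder reads during a given block period."""
    return {
        "forward": period,          # forward recursion walks the frame front to back
        "backward_a": period + 1,   # one backward decoder runs one block ahead (assumed offset)
        "backward_b": period + 2,   # the other backward decoder runs two blocks ahead (assumed offset)
    }


if __name__ == "__main__":
    for period in range(NUM_BLOCKS - 2):
        # Three different blocks must be readable at once, which is why a single
        # shared frame buffer either needs extra bandwidth or per-decoder caching.
        print(f"period {period}: {blocks_needed(period)}")
```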
For example, in a high speed implementation, a block or section of a frame may be processed at 60 MHz. For a rate one-half decoder, two input symbols (assuming each is byte-wide) are needed for every state update, requiring that an external symbol data RAM be accessed at two bytes times 60 MHz, which equals 120 megabytes per second. With no internal caching, the requirement is tripled to 360 megabytes per second because of the three constituent decoders. Additionally, for each constituent decoder, one piece of extrinsic data (also assumed byte-wide) is needed per state update. Accesses to a second external RAM would then require 60 megabytes per second per decoder, or 180 megabytes per second without caching. This can unnecessarily increase the cost of the decoders. Although the use of convolutional encoding with three constituent decoders has allowed for hardware processing (and software processing, if desired) of long turbo encoded sequences, the bandwidth required to feed input data to this high speed decoder can be prohibitive, and can lead to the addition of large amounts of internal cache memory in the decoding hardware.
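For clarity, the bandwidth figures in the preceding paragraph can be reproduced with the small calculation below; the 60 MHz state-update rate, byte-wide symbols, rate one-half code, and three constituent decoders are taken directly from the example above.

```python
# Reproduces the external-memory bandwidth figures from the background example.

STATE_UPDATE_RATE_HZ = 60e6   # 60 MHz block processing rate
SYMBOLS_PER_UPDATE = 2        # rate one-half code: two input symbols per state update
BYTES_PER_SYMBOL = 1          # byte-wide symbols
NUM_CONSTITUENT_DECODERS = 3  # one forward + two backward

# Symbol RAM bandwidth for a single decoder, and for all three with no caching.
symbol_bw_one = SYMBOLS_PER_UPDATE * BYTES_PER_SYMBOL * STATE_UPDATE_RATE_HZ
symbol_bw_all = symbol_bw_one * NUM_CONSTITUENT_DECODERS

# Extrinsic RAM bandwidth: one byte-wide value per state update per decoder.
extrinsic_bw_one = 1 * STATE_UPDATE_RATE_HZ
extrinsic_bw_all = extrinsic_bw_one * NUM_CONSTITUENT_DECODERS

print(f"symbol RAM, one decoder:    {symbol_bw_one / 1e6:.0f} MB/s")    # 120 MB/s
print(f"symbol RAM, no caching:     {symbol_bw_all / 1e6:.0f} MB/s")    # 360 MB/s
print(f"extrinsic RAM, one decoder: {extrinsic_bw_one / 1e6:.0f} MB/s") # 60 MB/s
print(f"extrinsic RAM, no caching:  {extrinsic_bw_all / 1e6:.0f} MB/s") # 180 MB/s
```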
One solution may be to use three different frame buffers, wherein each frame buffer stores the same information and each frame buffer is dedicated to one of the constituent decoders. However, the size and cost of such an arrangement can be prohibitive. Another solution may be for each decoder to access a common frame buffer three times faster, since each decoder needs data from the same frame buffer at the same time. With seven iterations of turbo decoding, this required increase in processing speed can drastically increase the cost of the memory.
Accordingly, there exists a need for a convolutional decoder and method with forward and backward recursion that employs a cost effective memory configuration to facilitate convolutional decoding.


REFERENCES:
patent: 5822341 (1998-10-01), Winterrowd et al.
patent: 5933462 (1999-08-01), Viterbi et al.
patent: 5983328 (1999-11-01), Potts et al.
patent: 6028899 (2000-02-01), Petersen
patent: 6425054 (2002-07-01), Nguyen
“Near Shannon Limit Error-Correcting Coding and Decoding: Turbo-Codes,” by C. Berrou et al., Proceedings of ICC '93, Geneva, Switzerland, pp. 1064-1070, May 1993.
“Efficient Implementation of Continuous MAP Decoders and a Synchronisation Technique for Turbo Decoders,” by Steven S. Pietrobon, International Symposium on Information Theory and its Applications, Victoria, BC, Canada, pp. 586-589, Sep. 1996.
“An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes,” by A. Viterbi, IEEE Journal on Selected Areas in Communications, vol. 16, no. 2, Feb. 1998.
