Initialization/prewindowing removal postprocessing for fast...

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed

Reexamination Certificate

active

06643676

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to adaptive filters and more particularly to fast recursive least squares (fast RLS) adaptive filters.
2. State of the Art
Adaptive filters find widespread use in communications. A known class of adaptive filter is the fast RLS adaptive filter. This class of adaptive filter, or more accurately, this class of filter adaptation algorithm, is recognized for its fast convergence and reduced computational complexity compared to earlier RLS algorithms (although the computational complexity remains considerable).
In one aspect, the attractiveness of RLS algorithms lies in the ability to compute updated filter settings using a new data input together with the old filter settings. This “recursion” is performed differently in different variations of the algorithm. In one variation, designated by the term “growing window covariance” (GWC), the adaptation algorithm takes into account a growing window of data that begins at time zero, for example, and is incrementally extended at each sample time until the adaptation has concluded. In another variation, designated by the term “sliding window covariance” (SWC), the adaptation algorithm takes into account a window of data that, at a particular point in the algorithm, becomes fixed in size. In the SWC algorithm, once the size of the data window has been fixed, each update incorporates knowledge from the newest sample and “disincorporates” knowledge from the oldest sample.
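To make the windowing distinction concrete, the following is a minimal numpy sketch of the growing-window and sliding-window bookkeeping in their direct (non-fast) form; the function names and the explicit normal-equation solve are illustrative assumptions and are not the fast recursions discussed in this document.
```python
# Conceptual sketch of growing-window vs. sliding-window least squares
# statistics (direct form, NOT the fast RLS recursions).
import numpy as np

def update_growing(R, p, x, d):
    """Growing window: fold one new regressor x and desired sample d in."""
    R = R + np.outer(x, x)   # sample covariance keeps accumulating
    p = p + d * x            # cross-correlation keeps accumulating
    return R, p

def slide_window(R, p, x_new, d_new, x_old, d_old):
    """Sliding window: incorporate the newest sample, 'disincorporate' the oldest."""
    R = R + np.outer(x_new, x_new) - np.outer(x_old, x_old)
    p = p + d_new * x_new - d_old * x_old
    return R, p

def ls_filter(R, p):
    """Least-squares filter from the normal equations R w = p."""
    return np.linalg.solve(R, p)
```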
In general, various different methods of windowing the input data of an adaptive filter are known. In order to achieve a favorable mathematical structure for computational efficiency, the most common implementations of the fast RLS algorithm use prewindowing. The prewindowing method makes the assumption that the input data prior to time zero are zero. This assumption results in a “prewindowing transient” during which the error quantity is unusually large. In other words, prewindowing perturbs the algorithm, delaying convergence. The GWC version of RLS does not require prewindowing. Without prewindowing, however, the RLS algorithm becomes computationally more burdensome and more implementation-sensitive.
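As an illustration of the prewindowing assumption only, the sketch below builds a matrix of regression vectors with the samples before time zero taken as zero; the helper name and layout are assumptions for illustration, not the patent's formulation.
```python
# Sketch of the prewindowing assumption: x(n) is treated as zero for n < 0.
import numpy as np

def prewindowed_regressors(x, N):
    """T x N matrix of regression vectors [x(n), x(n-1), ..., x(n-N+1)],
    with samples before time zero assumed to be zero."""
    T = len(x)
    X = np.zeros((T, N))
    for n in range(T):
        for k in range(N):
            if n - k >= 0:          # x(n-k) exists only for n-k >= 0
                X[n, k] = x[n - k]
    return X
```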
Initialization also perturbs the algorithm. As described in J. M. Cioffi and T. Kailath, “Fast, recursive least squares transversal filters for adaptive filtering”, IEEE Trans. on ASSP, ASSP-32(2):304-337, April 1984, more rapid convergence may be obtained by initializing the input data matrix with a sparse “fictitious” data submatrix located within a region corresponding to negative time. A sample covariance matrix is then initialized so as to be in agreement with the fictitious data. A desired response data vector (or matrix) may also be initialized with some fictitious data to reflect prior information on the filter settings. These initialization techniques may be referred to as “soft constraint initialization”. As compared to initializing the same quantities with zeros, the foregoing technique may significantly reduce the amount of perturbation experienced by the algorithm if the prior information is correct. Substantial perturbation remains, however, if the assumed prior information is incorrect, as is usually the case.
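The following is a minimal sketch of the soft-constraint idea, under the common assumption of a scaled-identity fictitious covariance and a prior filter guess; the names mu and w_prior and the diagonal form are illustrative choices, not the specific initialization of Cioffi and Kailath.
```python
# Sketch of soft constraint initialization: seed the LS statistics as if
# fictitious data had been observed before time zero, encoding a prior guess.
import numpy as np

def soft_constraint_init(N, mu, w_prior=None):
    R0 = mu * np.eye(N)              # covariance implied by the fictitious data
    w_prior = np.zeros(N) if w_prior is None else np.asarray(w_prior, float)
    p0 = R0 @ w_prior                # makes w_prior the LS solution before any real data
    return R0, p0
```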
In order to overcome the deleterious effects of prewindowing and initialization, an exponential “forgetting factor” is commonly used. As more and more data is processed, the influence of old data becomes exponentially more attenuated. Besides forgetting “incorrect” data, however, the same forgetting is also applied to actual data, with the result that the total amount of available data is used less than efficiently. If this forgetting factor could be eliminated entirely, the result would be better estimation performance and fewer numerical problems.
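For reference, here is a minimal sketch of the standard exponentially weighted RLS update in its inverse-covariance form, shown only to make the role of the forgetting factor explicit; lam = 1 corresponds to no forgetting, and the variable names are illustrative.
```python
# Textbook exponentially weighted RLS step; P is the inverse sample covariance.
import numpy as np

def ewrls_step(w, P, x, d, lam=0.99):
    pi = P @ x
    k = pi / (lam + x @ pi)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e                    # filter update
    P = (P - np.outer(k, pi)) / lam  # inverse-covariance update with forgetting
    return w, P, e
```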
Further background information concerning RLS adaptation algorithms may be found in the following references, incorporated herein by reference:
T. Kailath, Lectures on Wiener and Kalman Filtering, Springer-Verlag, Wien—New York, 1981.
S. Haykin, Adaptive Filter Theory, Prentice-Hall, Englewood Cliffs, N.J., 1995, third edition.
B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice-Hall, Englewood Cliffs, N.J., 1985.
B. Widrow and E. Walach, “On the Statistical Efficiency of the LMS Algorithm with Nonstationary Inputs”, IEEE Trans. on Information Theory, IT-30(2):211-221, March 1984, Special Issue on Adaptive Filtering.
D. T. M. Slock, “On the Convergence Behavior of the LMS and the Normalized LMS Algorithms”, IEEE Trans. on Signal Processing, 41(9):2811-2825, September 1993.
S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice Hall 1993.
E. Eleftheriou and D. Falconer, “Tracking Properties and Steady-State Performance of RLS Adaptive Filter Algorithms”, IEEE Trans. ASSP, ASSP-34(5):1097-1110, October 1986.
D. D. Falconer and L. Ljung, “Application of Fast Kalman Estimation to Adaptive Equalization”, IEEE Trans. Com., COM-26(10):1439-1446, October 1978.
J. M. Cioffi and T. Kailath, “Fast, recursive least squares transversal filters for adaptive filtering”, IEEE Trans. on ASSP, ASSP-32(2):304-337, April 1984.
D. T. M. Slock and T. Kailath, “Numerically Stable Fast Transversal Filters for Recursive Least Squares Adaptive Filtering”, IEEE Trans. Signal Proc., ASSP-39(1):92-114, January 1991.
D. T. M. Slock, “Backward Consistency Concept and Round-Off Error Propagation Dynamics in Recursive Least Squares Algorithms”, Optical Engineering, 31(6):1153-1169, June 1992.
J. M. Cioffi and T. Kailath, “Windowed Fast Transversal Filters Adaptive Algorithms with Normalization”, IEEE Trans. on ASSP, ASSP-33(3):607-625, June 1985.
J. M. Cioffi, “The Block-Processing FTF Adaptive Algorithm”, IEEE Trans. on ASSP, ASSP-34(1):77-90, February 1986.
SUMMARY OF THE INVENTION
The present invention, generally speaking, accelerates convergence of a fast RLS adaptation algorithm by performing postprocessing, following the processing of a burst of data, to remove the effects of prewindowing, fictitious data initialization, or both. This postprocessing is part of a burst-mode adaptation strategy in which data (signals) are processed in chunks (bursts). Such a burst-mode approach is applicable whenever continuous adaptation of the filter is not possible (the algorithmic complexity is too high to run in real time) or not required (the optimal filter setting varies only slowly with time).
Postprocessing consists of a series of “downdating” operations (as opposed to updating) that in effect advance the beginning point of the data window. The beginning point is advanced beyond the fictitious data used for initialization and beyond the prewindowing region. In other variations, downdating is applied only to data within the prewindowing region. The forgetting factor of conventional algorithms can be eliminated entirely, and performance equivalent to that of GWC RLS algorithms is achieved at substantially lower computational cost. In particular, a postprocessing Fast Kalman Algorithm in effect transforms an initialized/prewindowed least squares estimate into a Covariance Window least squares estimate.
Various further refinements are possible. Initialization may be cancelled completely or only partially. For example, in order to reduce the dynamic range of algorithmic quantities, it may be advantageous, in a subsequent initialization, to add an increment to a forward error energy quantity calculated during a previous burst; postprocessing may then be performed to cancel only the added increment. Also, to reduce the usual large startup error transient, the desired response data can be modified in a way that dampens the transient, and the modified desired response data are saved for use in later postprocessing.
Furthermore, to allow for more rapid adaptation without the use of an exponential forgetting factor, a weighting factor less than one may be applied to the forward error energy quantity during initialization from one burst to the next. This allows for the most efficient use of data but limited a...
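The following is a conceptual, non-fast sketch of the downdating idea described above: after a burst has been processed, rank-one downdates remove the contribution of the fictitious initialization data and of the prewindowed samples, leaving a covariance-window estimate. It is a direct O(N^2)-per-sample illustration only, with assumed helper names; the invention's point is to achieve the equivalent effect with fast Kalman-type recursions, whose details are not reproduced here.
```python
# Post-burst downdating, direct form: remove old samples from the LS statistics
# and re-solve for the covariance-window estimate.
import numpy as np

def downdate(R, p, x_old, d_old):
    """Remove one old (fictitious or prewindowed) sample from the LS statistics."""
    R = R - np.outer(x_old, x_old)
    p = p - d_old * x_old
    return R, p

def postprocess_burst(R, p, removed_samples):
    """Downdate away each (x, d) pair accumulated before the desired window start,
    then re-solve the normal equations for the covariance-window filter."""
    for x_old, d_old in removed_samples:
        R, p = downdate(R, p, x_old, d_old)
    return np.linalg.solve(R, p), R, p
```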
