Adaptive filtering and learning system having increased...

Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed

Reexamination Certificate


Details

Type: Reexamination Certificate

Status: active

Patent number: 06732129

ABSTRACT:

The present invention is directed to a method and apparatus for adaptive filtering or “learning”, which increases both the stability and the rate of convergence of the system, while decreasing its residual error. For an adaptive filter, the adaptive coefficient is variable to produce a more efficient adaptation of the “weights”. The method and apparatus according to the invention can be used in conjunction with known adaptive filtering techniques and devices, using for example the LMS algorithm, or any other technique such as Newton, Steepest-Descent, SHARF, SER, random search, LMS-Newton, RLS, Widrow, Griffith's LMS, which are known to those skilled in the art.
BACKGROUND OF THE INVENTION
Adaptive systems are systems that can adapt to changing environments or changing system requirements, self-optimize, self-design, self-train, repair themselves, and adapt around internal defects, by adjustment of system parameters or so-called “weights” W which determine a transfer function of the filter. Adaptive filtering and learning techniques are used in a variety of applications, such as signal processing, communications, numerical analysis, instrumentation and measurement, analog/digital conversion, digital filtering, radar, sonar, seismology, mechanical design, navigation systems, biomedical electronics, echo canceling, predictors, inverse modeling, deconvolution, equalization, inverse filtering, adaptive control systems, adaptive arrays, feedforward and feedback systems (open and closed loop systems), prediction, system identification (modeling), cellular or wireless products, interference canceling, signal canceling, to determine solutions to sets of equations, etc.
In adaptive filtering or learning techniques, the adaptation process utilizes the following generic algorithm (for systems having one weight):
W_{k+1} = W_k + μ(−∇_k)
where W_k is the value of the weight at time k; μ is the adaptive coefficient; and −∇_k is the gradient or direction to adapt (having both magnitude and direction), which can also be an approximation.
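As a minimal sketch of one such iteration (assuming, purely for illustration, a one-tap filter driven by an input sample x, a desired response d, and the instantaneous LMS gradient estimate; none of these particulars are fixed by the text above):

```python
def lms_update_scalar(w, x, d, mu):
    # filter output for a single weight and a single input sample
    y = w * x
    # error against the desired response
    e = d - y
    # instantaneous LMS gradient estimate: grad ≈ -2*e*x, so -grad ≈ 2*e*x;
    # the constant factor is conventionally absorbed into mu
    w_next = w + mu * e * x
    return w_next, e
```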
For systems having multiple weights (that is, systems which are designed to emulate processes or phenomena which are characterized by more than one parameter):
W_{k+1} = W_k + μ(−∇_k)
where W_k is a vector of the weight values at time k; μ in this case is a vector (i.e., each weight has its own adaptive coefficient); and −∇_k is the gradient vector or direction to adapt, as described above.
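A corresponding sketch for the multi-weight case, again using the instantaneous LMS gradient estimate and treating μ as a vector with one entry per weight (illustrative assumptions, since the text leaves the gradient estimator open):

```python
import numpy as np

def lms_update_vector(w, x, d, mu):
    """w, x and mu are NumPy vectors of equal length; mu holds one adaptive
    coefficient per weight, as described above."""
    y = np.dot(w, x)             # filter output for the current tap-input vector
    e = d - y                    # scalar error against the desired sample
    neg_grad = e * x             # instantaneous estimate of -grad_k (constant folded into mu)
    w_next = w + mu * neg_grad   # element-wise step: each weight uses its own mu
    return w_next, e
```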
It is known that the adaptive coefficient μ has the following influences:
A small μ translates into slow but stable convergence, with a smaller effect due to noise and a small residual error.
A larger μ converges more quickly, but is less stable, and noise becomes more of a factor.
Accordingly, it is advantageous to vary the value of μ used in the above generic adaptive filtering algorithm in a manner which improves the convergence rate and stability while decreasing the residual error in the system output, even in the presence of significant noise. Adaptive filtering techniques which vary the adaptive coefficient for this purpose are known. For example, U.S. Pat. No. 4,791,390 discloses an adaptive filter in which a variable scale factor is chosen for each iteration of an adaptive process, based on sign changes in the incremental weight changes. Similarly, U.S. Pat. No. 4,349,889 discloses a non-recursive filtering technique in which a sequence of delayed versions of the input signal is weighted using a sequence of coefficients, each of which is adjusted iteratively by positive and negative correction steps. Larger or smaller step sizes are implemented depending on the number of preceding correction steps for which the sign is unchanged.
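For concreteness, a generic sign-based step-size rule of the kind described above might be sketched as follows; the run length and scaling factors are illustrative assumptions, and this is not the specific algorithm of either the '390 or the '889 patent:

```python
def sign_based_mu(mu, sign_history, new_sign, run_length=3,
                  grow=1.5, shrink=0.5):
    """Enlarge the step after a run of same-signed corrections,
    reduce it when the sign of the correction flips."""
    sign_history.append(new_sign)
    if len(sign_history) >= 2 and sign_history[-1] != sign_history[-2]:
        mu *= shrink                     # sign change: back off
    elif len(sign_history) >= run_length and \
            all(s == new_sign for s in sign_history[-run_length:]):
        mu *= grow                       # consistent direction: take larger steps
    return mu
```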
The filtering arrangements disclosed in the '390 and '889 patents both achieve enhanced convergence relative to systems in which the scale factor or adaptive coefficients are fixed. However, each of these systems is limited in that the change of the scale factor is based solely on evaluation of the sign of each step in the process. Thus, in neither device nor technique is the overall stability of the convergence of the system fully taken into account. Accordingly, neither attempts to maintain stability when speeding convergence. Moreover, in diverging cases, the divergence may actually occur much more quickly.
It is therefore an object of the present invention to provide an improved adaptive filtering method and apparatus in which the convergence speed and residual error values are enhanced by taking into account indicia of the convergence stability of the system. The present invention considers the spatial vector of each step, rather than just its sign.
SUMMARY OF THE INVENTION
In the adaptive filtering or learning system according to the present invention, convergence of the system is optimized by taking into account the stability with which it converges to a desired state. That is, in the adaptive technique according to the invention, for optimal convergence, a large μ value is used in stable regions with relatively little noise, as determined by the criteria described in detail hereinafter. On the other hand, a smaller μ is used in unstable or noisy environments. Also, when the solution has been reached (that is, the system has converged), μ is decreased to reduce the effects of noise and the residual error.
Therefore, the method and apparatus according to the invention provide an optimum rate of convergence based on the overall stability of the system, while minimizing the residual error. This method is particularly valuable in non-steady-state conditions, because the variable adaptive coefficient allows the weights to behave appropriately when conditions are changing. The weights adapt toward the solution and “settle” at the solution until conditions change; then the weights adapt to the new environment.
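A hypothetical sketch of this regime-based choice of μ (the threshold, the notion of a scalar "stability level", and the convergence flag are all assumptions introduced only for illustration):

```python
def select_mu(stability_level, converged, mu_small, mu_large, stable_threshold=0.8):
    """Large mu on a stable, low-noise convergence path; small mu in
    unstable or noisy conditions, or once the weights have settled."""
    if converged:
        return mu_small      # at the solution: reduce jitter and residual error
    if stability_level >= stable_threshold:
        return mu_large      # stable region: converge quickly
    return mu_small          # unstable or noisy region: favor stability
```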
For a better understanding of the explanation which follows, the following definitions are provided:
Stable convergence refers to a convergence path that is transcending in a stable manner, based on criteria which may be specified, for example, in terms of the time variation or “curvature” of the step size in the weights, or in other manners, as described hereinafter. FIG. 2 is an example of stable convergence.
Unstable convergence refers to a convergence path that is transcending in an unstable manner, based on criteria described herein. The latter may also be specified in terms of the curvature of the step size in the weights or in other manners. FIG. 3 shows an example of unstable convergence.
Noisy Solution is a solution which exhibits “jitter” or oscillation of the weights around the solution due to noise when the weights have reached convergence, leading to residual error. FIG. 4 is an example of a noisy solution.
Spacing means the amount of space between points used to determine whether the weights are transcending in a stable or unstable manner, as shown in FIG. 5.
Detail refers to the number of points used to determine the stability of the convergence path of the weights; it is thus a measure of how detailed the curve will be. This term is also used herein to refer to the individual data points themselves. See FIG. 6.
Stability Level is a measure of how stable or unstable the convergence path is, determined as described hereinafter.
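Using the terms just defined, one possible (hypothetical) stability measure could be sketched as follows; the sign-agreement score is only one of many ways the "curvature" criterion mentioned above might be realized:

```python
import numpy as np

def stability_level(w_history, spacing, detail):
    """Take `detail` points from the recent weight history, `spacing` (>= 1)
    samples apart, and score how consistently the resulting increments keep
    the same sign. Returns a value in [0, 1]: 1.0 for a smooth, monotone
    path, lower values for an oscillatory one."""
    # most recent points first, then restore chronological order
    idx = list(range(len(w_history) - 1, -1, -spacing))[:detail][::-1]
    points = np.asarray([w_history[i] for i in idx], dtype=float)
    if len(points) < 3:
        return 1.0                        # too little history to judge: treat as stable
    steps = np.diff(points)               # increments between sampled points
    agree = np.sign(steps[1:]) == np.sign(steps[:-1])
    return float(np.mean(agree))
```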
In the method and apparatus according to the invention, the adaptive coefficient μ is increased, decreased, or kept constant (in a manner described in detail hereinafter), depending on certain criteria as to whether the values of W are stable or unstable. It should be noted in this regard that although the algorithm discussed herein is presented as dependent upon the values of the weights (W) or their increments, the same effect may be derived using the values of the gradient (−∇_k) or an estimate of it. Also, the error (or any other signal used to estimate the gradient) may be used to implement the same algorithm.
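A hypothetical increase/decrease/hold rule driven by such a stability score might look like the following; all thresholds and scaling factors are illustrative assumptions rather than values taken from the disclosure:

```python
def adapt_mu(mu, stability, hi=0.75, lo=0.4, grow=1.5, shrink=0.5,
             mu_min=1e-6, mu_max=1.0):
    """Increase mu when the weight trajectory looks stable, decrease it when
    it looks unstable or noisy, otherwise keep it constant."""
    if stability >= hi:
        return min(mu * grow, mu_max)     # stable trajectory: speed convergence
    if stability <= lo:
        return max(mu * shrink, mu_min)   # unstable or noisy: protect stability, cut residual error
    return mu                             # otherwise: hold mu constant
```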
