Method and system for training dynamic nonlinear adaptive...

Data processing: artificial intelligence – Neural network – Learning task

Reexamination Certificate


Details

Class: C706S025000
Type: Reexamination Certificate
Status: active
Patent number: 06351740

ABSTRACT:

II. FIELD OF THE INVENTION
This invention relates to neural networks or adaptive nonlinear filters which contain linear dynamics, or memory, embedded within the filter. In particular, this invention describes a new system and method by which such filters can be efficiently trained to process temporal data.
III. BACKGROUND OF THE INVENTION
In problems concerning the emulation, control or post-processing of nonlinear dynamic systems, it is often the case that the exact system dynamics are difficult to model. A typical solution is to train the parameters of a nonlinear filter to perform the desired processing, based on a set of inputs and a set of desired outputs, termed the training signals. Since its discovery in 1985, backpropagation (BP) has emerged as the standard technique for training multi-layer adaptive filters to implement static functions, to operate on tapped-delay-line inputs, and in recursive filters where the desired outputs of the filters are known [1, 2, 3, 4, 5]. The principle of static BP was extended to networks with embedded memory via backpropagation-through-time (BPTT), which has been used to train network parameters in feedback loops when components in the loop are modeled [6] or unmodeled [7]. For the special case of finite impulse response (FIR) filters, of the type discussed herein, the BPTT algorithm has been further refined [8]. Like BP, BPTT is a steepest-descent method, applied successively to each layer in a nonlinear filter, but it accounts for the outputs of a layer continuing to propagate through a network for an extended length of time. Consequently, the algorithm updates network parameters according to the error they produce over the time spanned by the training data. It has been shown [9] that the steepest-descent approach is locally H∞ optimal in prediction applications where training inputs vary at each weight update, or training epoch. However, when the same training data is used for several epochs, BPTT is suboptimal, and techniques which generate updates closer to the Newton update direction (see section 10) are preferable. We will refer to such techniques as Newton-like methods.
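As an illustration of why Newton-like updates are preferred over steepest descent when training repeatedly on the same data, the sketch below (not taken from the patent; the quadratic error surface and all numbers are assumptions chosen for illustration) compares the two update rules on an ill-conditioned quadratic: steepest descent must use a step size limited by the largest curvature and so crawls along the shallow direction, while the Newton step, which scales the gradient by the inverse Hessian, reaches the minimum of a quadratic in a single update.

```python
import numpy as np

# Quadratic error surface E(w) = 0.5 * w^T A w - b^T w, minimum at w* = A^{-1} b.
# A plays the role of the Hessian; its eigenvalues (10 and 1) make it ill-conditioned.
A = np.array([[10.0, 0.0], [0.0, 1.0]])
b = np.array([10.0, 1.0])
w_star = np.linalg.solve(A, b)  # exact minimum: [1, 1]

def gradient(w):
    return A @ w - b

# Steepest descent: w <- w - eta * grad E. The step size eta must stay below
# 2/lambda_max = 0.2 for stability, so progress along the eigenvalue-1 axis is slow.
w_sd = np.zeros(2)
for _ in range(50):
    w_sd -= 0.05 * gradient(w_sd)

# Newton update: w <- w - H^{-1} grad E. Exact in one step for a quadratic.
w_nt = np.zeros(2)
w_nt -= np.linalg.solve(A, gradient(w_nt))

print(np.linalg.norm(w_sd - w_star))  # visible residual error along the shallow axis
print(np.linalg.norm(w_nt - w_star))  # essentially zero after a single update
```

In a trained filter the Hessian is neither constant nor diagonal, which is why practical Newton-like methods only approximate this idealized step, but the conditioning argument is the same.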
Since steepest-descent techniques such as BPTT often behave poorly in terms of convergence rates and error minimization, it is an object of this invention to create a method by which Newton-like optimization techniques can be applied to nonlinear adaptive filters containing embedded memory for the purpose of processing temporal data. It is further an object of this invention to create an optimization technique which is better suited to training an FIR or IIR network to process temporal data than classical Newton-like techniques [10]. It is further an object of this invention to create multi-layer adaptive filters which are tailored for specific applications, and which can be efficiently trained with the novel Newton-like algorithm.
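To make the notion of "embedded memory" concrete, the following sketch (an assumption-laden illustration, not the patent's implementation) shows one layer of a filter whose synapses are FIR filters: each unit's output at time n depends on the last T input samples, so a parameter's effect on the error is spread over time, which is exactly what BPTT and its Newton-like refinements must account for.

```python
import numpy as np

def fir_layer(x, W):
    """One layer with FIR synapses.

    x: input signal, shape (N,); W: FIR filter taps, shape (units, T).
    Returns tanh of each unit's tapped-delay-line response, shape (units, N).
    """
    units, T = W.shape
    N = len(x)
    # Tapped delay line: column n holds [x[n], x[n-1], ..., x[n-T+1]],
    # with zeros before the start of the signal.
    X = np.zeros((T, N))
    for k in range(T):
        X[k, k:] = x[:N - k]
    return np.tanh(W @ X)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # 64-sample input signal
W = rng.standard_normal((3, 5)) * 0.1  # 3 units, 5 taps each
y = fir_layer(x, W)
print(y.shape)  # (3, 64)
```

A multi-layer FIR network stacks such layers, so the memory of each layer compounds; the names `fir_layer` and the tanh nonlinearity are illustrative choices, not terms from the patent.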


REFERENCES:
patent: 4843583 (1989-06-01), White et al.
patent: 5175678 (1992-12-01), Frerichs et al.
patent: 5272656 (1993-12-01), Genereux
patent: 5376962 (1994-12-01), Zortea
patent: 5542054 (1996-07-01), Batten, Jr.
patent: 5548192 (1996-08-01), Hanks
patent: 5617513 (1997-04-01), Schnitta
patent: 5761383 (1998-06-01), Engel et al.
patent: 5963929 (1999-10-01), Lo
patent: 6064997 (2000-05-01), Jagannathan et al.
Ong et al., “A Decision Feedback Recurrent Neural Equalizer as an Infinite Impulse Response Filter”, IEEE Transactions on Signal Processing, Nov. 1997.
Rui J. P. de Figueiredo, “Optimal Neural Network Realizations of Nonlinear FIR and IIR Filters”, IEEE International Symposium on Circuits and Systems, Jun. 1997.
Yu et al., “Dynamic Learning Rate Optimization of the Back Propagation Algorithm”, IEEE Transactions on Neural Networks, May 1995.
White et al., “The Learning Rate in Back-Propagation Systems: an Application of Newton's Method”, IEEE IJCNN, May 1990.
Nobakht et al., “Nonlinear Adaptive Filtering using Annealed Neural Networks”, IEEE International Symposium on Circuits and Systems, May 1990.
Pataki, B., “Neural Network based Dynamic Models”, Third International Conference on Artificial Neural Networks, IEEE, 1993.
Dimitri P. Bertsekas, “Incremental Least Squares Methods and the Extended Kalman Filter”, IEEE Proceedings of the 33rd Conference on Decision and Control, Dec. 1994.
Puskorius et al., “Multi-Stream Extended Kalman Filter Training for Static and Dynamic Neural Networks”, IEEE International Conference on Systems, Man, and Cybernetics, Oct. 1997.
Back et al., “Internal Representation of Data in Multilayer Perceptrons with IIR Synapses”, Proceedings of the 1992 IEEE International Conference on Circuits and Systems, May 1992.
Sorensen, O., “Neural Networks Performing System Identification for Control Applications”, IEEE Third International Conference on Artificial Neural Networks, 1993.
Back et al., “A Unifying View of Some Training Algorithms for Multilayer Perceptrons with FIR Filter Synapses”, Proceedings of the 1994 IEEE Workshop on Neural Networks for Signal Processing, Sep. 1994.
Sorensen, O., “Neural Networks for Non-Linear Control”, Proceedings of the Third IEEE Conference on Control Applications, Aug. 1994.
