Fast temporal neural learning using teacher forcing

Patent


Details

395/20, 395/21, G06F 15/18

active

054287103

ABSTRACT:
A neural network is trained to output a time-dependent target vector, defined over a predetermined time interval, in response to a time-dependent input vector defined over the same interval, by applying corresponding elements of the error vector — the difference between the target vector and the actual neuron output vector — to the inputs of corresponding output neurons of the network as corrective feedback. This feedback decreases the error and quickens the learning process, so that a much smaller number of training cycles is required to complete learning. A conventional gradient descent algorithm is employed to update the neural network parameters at the end of the predetermined time interval. The foregoing process is repeated in successive cycles until the actual output vector corresponds to the target vector. In the preferred embodiment, as the overall error of the neural network output decreases during successive training cycles, the portion of the error fed back to the output neurons is decreased accordingly, allowing the network to learn with greater freedom from teacher forcing as the network parameters converge to their optimum values. The invention may also be used to train a neural network with stationary training and target vectors.
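The training loop the abstract describes can be sketched in a few lines: the error between the target and the actual output is scaled by a feedback gain and injected back into the output neuron's input, the gain is annealed as the overall error falls, and ordinary gradient descent updates the parameters at the end of each pass over the interval. The sketch below is illustrative only — a single recurrent neuron in NumPy, with finite-difference gradients standing in for the patent's gradient computation; none of the names or constants come from the patent.

```python
import numpy as np

def run(params, lam, x, target):
    """One pass over the interval; lam * error is fed back into the neuron input."""
    w_in, w_rec, w_out = params
    h, loss = 0.0, 0.0
    for xt, dt in zip(x, target):
        y = w_out * h                          # actual output at this time step
        e = dt - y                             # element of the error vector
        loss += 0.5 * e * e
        # teacher forcing: a portion lam of the error corrects the dynamics
        h = np.tanh(w_rec * h + w_in * xt + lam * e)
    return loss / len(x)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 50)
x, target = np.sin(t), np.cos(t)               # time-dependent input and target

params = rng.normal(scale=0.5, size=3)
loss_before = run(params, 0.0, x, target)      # free-running loss, untrained

lam, lr, eps = 1.0, 0.05, 1e-5
for epoch in range(500):
    base = run(params, lam, x, target)
    # conventional gradient descent at the end of the interval
    # (finite differences stand in for backprop/adjoint gradients)
    grad = np.array([
        (run(params + eps * np.eye(3)[i], lam, x, target) - base) / eps
        for i in range(3)
    ])
    params -= lr * grad
    # anneal: as the overall error decreases, feed back less of it
    lam = min(1.0, base)

loss_after = run(params, 0.0, x, target)       # free-running loss, trained
```

Because `lam` shrinks toward the current error level, early epochs are strongly guided by the target while later epochs run the network nearly free, which is the convergence behavior the preferred embodiment describes.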

REFERENCES:
patent: 4804250 (1989-02-01), Johnson
patent: 4912652 (1990-03-01), Wood
patent: 4918618 (1990-04-01), Tomlinson, Jr.
patent: 4926064 (1990-05-01), Tapang
patent: 4951239 (1990-08-01), Andes et al.
patent: 4953099 (1990-08-01), Jourjine
patent: 4967369 (1990-10-01), Jourjine
patent: 4990838 (1991-02-01), Kawato et al.
patent: 5014219 (1991-05-01), White
patent: 5046019 (1991-09-01), Basehore
patent: 5046020 (1991-09-01), Filkin
patent: 5050095 (1991-09-01), Samad
patent: 5052043 (1991-09-01), Gaborski
patent: 5056037 (1991-10-01), Eberhardt
patent: 5058034 (1991-10-01), Murphy et al.
patent: 5075868 (1991-12-01), Andes et al.
patent: 5086479 (1992-02-01), Takenaga et al.
patent: 5093899 (1992-03-01), Hiraiwa
patent: 5146602 (1992-09-01), Holler et al.
patent: 5253329 (1993-10-01), Villarreal et al.
patent: 5313558 (1994-05-01), Adams
Michail Zak, "Terminal Attractors in Neural Networks," Neural Networks, vol. 2, pp. 259-274, 1989.
J. Barhen, et al., "Application of Adjoint Operators to Neural Learning", Appl. Math. Lett., vol. 3, No. 3, pp. 13-18, 1990, printed in Great Britain.
J. Barhen, et al., "Adjoint Operator Algorithms for Faster Learning Dynamical Neural Networks", Center for Space Micro-electronics Technology, Jet Propulsion Laboratory, California Institute of Technology, pp. 498-508.
R. J. Williams et al., "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Computation, vol. 1, No. 2, pp. 270-280.
R. J. Williams et al., "A Learning Algorithm for continually running fully recurrent neural networks," Technical Report ICS Report 8805, UCSD, La Jolla, California 92093.
Kumpati S. Narendra, Fellow, IEEE, and Kannan Parthasarathy, "Identification and Control of Dynamical Systems Using Neural Networks", vol. 1, No. 1, Mar. 1990.
Masa-aki Sato, "A Learning Algorithm to Teach Spatiotemporal Patterns to Recurrent Neural Networks", Biological Cybernetics, (1990) pp. 259-263.
Masa-aki Sato, "A Real Time Learning Algorithm for Recurrent Analog Neural Networks", Biological Cybernetics, (1990) pp. 237-241.
Fernando J. Pineda, "Time Dependent Adaptive Neural Networks", Center for Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, pp. 710-718.
N. Toomarian and J. Barhen, "Adjoint-Operators and Non-Adiabatic Learning Algorithms in Neural Networks", Appl. Math. Lett., vol. 4 No. 2, pp. 69-73, 1991, printed in Great Britain.
Barak A. Pearlmutter, "Dynamic Recurrent Neural Networks", School of Computer Science, Carnegie Mellon University, Pittsburgh, Pa. 15213. This research was sponsored in part by The Defense Advanced Research Projects Agency, Information Science and Technology Office, under the title Research on Parallel Computing, ARPA Order No. 7330 issued by DARPA/CMO.
Barak A. Pearlmutter, "Learning State Space Trajectories in Recurrent Neural Networks", Neural Computation 1, pp. 263-269 (1989) Massachusetts Institute of Technology.

Profile ID: LFUS-PAI-O-293547
