Recurrent neural networks teaching system

Patent

Details

Classification: G06F 15/18
Type: Patent
Status: active
Patent number: 051827948
ABSTRACT:
A teaching method for a recurrent neural network having hidden, output, and input neurons calculates weighting errors over a limited number of propagations of the network. This permits conventional teaching sets, such as those used with feedforward networks, to be used with recurrent networks. The teaching outputs are substituted for the computed activations of the output neurons in both the forward propagation and error correction stages. The error back-propagated from the last propagation is assumed to be zero for the hidden neurons. Also described are a method of reducing drift of the network with respect to a modeled process and a forced-cycling method that eliminates the time lag between network input and output.
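
Illustrative example. The Python sketch below is a rough reading of the abstract only, not of the patent's claims: a fully recurrent network is propagated forward a limited number of times with the teaching outputs substituted for the computed output activations, and the error is then propagated back over those same steps, with the error reaching the hidden neurons from beyond the last propagation taken to be zero. The network layout, variable names, sigmoid activation, and gradient-descent correction are assumptions made for the example.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def teach(W, inputs, targets, out_idx, lr=0.1):
    """One weight correction computed over a limited number of propagations.

    W       : (n, n) weight matrix over all neurons (input, hidden, output)
    inputs  : list of length-n external input vectors, one per propagation
    targets : list of teaching outputs for the output neurons, one per propagation
    out_idx : indices of the output neurons
    (Layout and names are illustrative assumptions, not the patent's method.)
    """
    n = W.shape[0]
    n_props = len(inputs)                  # limited number of propagations
    states = [np.zeros(n)]                 # activations before the first propagation
    out_err = []

    # Forward propagations: the teaching output replaces the computed
    # activation of each output neuron at every step.
    for x, t in zip(inputs, targets):
        a = sigmoid(W @ states[-1] + x)
        out_err.append(a[out_idx] - t)     # output error before substitution
        a[out_idx] = t                     # substitute the teaching output
        states.append(a)

    # Backward pass over the same propagations: error reaching the hidden
    # neurons from beyond the last propagation is assumed to be zero.
    grad = np.zeros_like(W)
    delta_next = np.zeros(n)
    for k in range(n_props - 1, -1, -1):
        a = states[k + 1]
        err = W.T @ delta_next             # error fed back from later steps
        err[out_idx] += out_err[k]         # plus this step's output error
        delta = err * a * (1.0 - a)        # sigmoid derivative
        grad += np.outer(delta, states[k])
        delta_next = delta

    return W - lr * grad                   # gradient-descent weight correction

For example, with six neurons of which the last two are outputs, out_idx would be np.array([4, 5]) and each call to teach returns the corrected weight matrix for the next teaching set.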

REFERENCES:
F. J. Pineda, "Generalization of Back-Propagation to Recurrent Neural Networks," Physical Review Letters, vol. 59, no. 19, pp. 2229-2232, Nov. 9, 1987.
R. J. Williams and D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," Neural Computation, vol. 1, pp. 270-280, 1989.
F. J. Pineda, "Dynamics and Architecture for Neural Computation," Journal of Complexity, vol. 4, pp. 216-245, 1988.


