Patent
1994-12-21
1997-05-13
Davis, George B.
395/24, G06F 15/18
Patent
active
056300204
ABSTRACT:
The invention relates to a learning method carried out in a neural network operating on the basis of the gradient back-propagation algorithm. To determine the new synaptic coefficients with a minimum learning period, the invention introduces parameters that, at the start of learning, favor corrections based on the sign of the error and then gradually induce finer corrections. This can be complemented by other parameters implementing a layer-wise strategy that accelerates learning in the input layers relative to the output layers. A further strategy acting on the entire neural network can also be added.
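The schedule described in the abstract (coarse, sign-based corrections early; finer gradient-based corrections later; faster rates for earlier layers) can be sketched as follows. This is a minimal illustration on a simple quadratic objective, not the patent's exact formulation: the blend parameter `beta`, the linear ramp, and the per-parameter rates standing in for the layer-wise strategy are all assumptions.

```python
import numpy as np

# Minimize f(w) = ||w - target||^2 with a blended correction rule:
# early steps follow only the sign of the gradient (coarse), and a
# parameter `beta`, ramping from 0 to 1, gradually shifts the update
# toward the true gradient (fine). Distinct per-parameter rates stand
# in for the layer-wise strategy (earlier layers learn faster).
target = np.array([3.0, -2.0])
w = np.zeros_like(target)
rates = np.array([0.2, 0.1])  # first "layer" corrected twice as fast

steps = 200
for t in range(steps):
    g = 2.0 * (w - target)    # gradient of the squared error
    beta = t / (steps - 1)    # 0 at the start of learning, 1 at the end
    w -= rates * ((1 - beta) * np.sign(g) + beta * g)

print(np.round(w, 2))  # approaches `target`
```

In the sign-only phase the step size is fixed by `rates`, so the iterate marches toward the minimum at a constant pace regardless of gradient magnitude; as `beta` grows, the gradient term takes over and the oscillation around the optimum damps out.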
REFERENCES:
patent: 4933872 (1990-06-01), Vandenberg et al.
patent: 4994982 (1991-02-01), Duranton et al.
"Parallel Distributed Processing", vol. 1, David E. Rumelhart, 1989.
"Neural Computing: Theory and Practice", Philip D. Wasserman, Apr. 1989.
"An Analog VLSI Implementation of Hopfield's Neural Network", 1989 IEEE, Michel Verleysen and Paul G. A. Jespers.
"Neural Networks, Part 2: What Are They and Why Is Everybody So Interested in Them Now?", Philip D. Wasserman.
Davis, George B.
Haken, Jack E.
U.S. Philips Corporation