Learning method and apparatus for neural networks and simulator

Patent
Date filed: 1992-06-23
Date issued: 1995-02-14
Classification: G06F 15/18
Status: active
053902840
ABSTRACT:
A neural network (100) has an input layer, a hidden layer, and an output layer. The neural network stores weight values which operate on data input at the input layer to generate output data at the output layer. An error computing unit (87) receives the output data and compares it with desired output data from a learning data storage unit (105) to calculate error values representing the difference. An error gradient computing unit (81) calculates an error gradient, i.e., the rate and direction of error change. A ratio computing unit (82) computes a ratio or percentage of a prior conjugate vector and combines the ratio with the error gradient. A conjugate vector computing unit (83) generates a present line search conjugate vector from the error gradient value and a previously calculated line search conjugate vector. A line search computing unit (95) includes a weight computing unit (88) which calculates a weight correction value. The weight correction value is compared (18) with a preselected maximum or upper limit correction value (κ). The line search computing unit (95) limits adjustment of the weight values stored in the neural network in accordance with the maximum weight correction value.
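For orientation, the procedure the abstract describes amounts to conjugate-gradient backpropagation in which each weight correction is clamped to a preselected maximum κ before the weights are updated. The Python sketch below is illustrative only, not the patented implementation: the Fletcher-Reeves ratio, the fixed step size standing in for the patent's line search, and all function and variable names are assumptions.

    # Minimal sketch of conjugate-gradient training with a capped weight
    # correction, loosely following the units named in the abstract.
    # Illustrative only: the Fletcher-Reeves ratio and the fixed step size
    # (standing in for the patent's line search) are assumptions.
    import numpy as np

    def train_step(w, grad_fn, prev_grad, prev_dir, kappa=0.1, lr=0.05):
        g = grad_fn(w)                    # error gradient (cf. unit 81)
        if prev_dir is None:
            d = -g                        # first step: plain steepest descent
        else:
            # Ratio of present to prior gradient magnitudes (cf. unit 82),
            # combined with the gradient to form the present conjugate
            # vector from the previous one (cf. unit 83).
            beta = (g @ g) / (prev_grad @ prev_grad)
            d = -g + beta * prev_dir
        delta_w = lr * d                  # weight correction value (cf. unit 88)
        # Compare the correction against the preselected maximum kappa and
        # limit the adjustment accordingly (cf. comparison 18, unit 95).
        delta_w = np.clip(delta_w, -kappa, kappa)
        return w + delta_w, g, d

    # Toy usage: minimize the quadratic error E(w) = ||w||^2 (gradient 2w).
    w, g_prev, d_prev = np.array([2.0, -3.0]), None, None
    for _ in range(200):
        w, g_prev, d_prev = train_step(w, lambda x: 2.0 * x, g_prev, d_prev)
    print(w)  # approaches the minimum at the origin

Capping each correction at κ bounds the step even when the computed search direction is large on a steep region of the error surface, which appears to be the stability property the limiting step is aimed at.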
REFERENCES:
patent: 5073867 (1991-12-01), Murphy et al.
patent: 5093899 (1992-03-01), Hiraiwa
patent: 5168550 (1992-12-01), Sakaue et al.
patent: 5228113 (1993-07-01), Shelton
Woodland, "Weight Limiting, Weight Quantisation, and Generalisation in Multi-Layer Perceptrons," First IEE International Conference on Artificial Neural Networks, Oct. 16-18, 1989, pp. 297-300.
Ghiselli-Crippa et al., "A Fast Neural Net Training Algorithm and Its Application to Voiced-Unvoiced-Silence Classification of Speech," International Conference on Acoustics, Speech, and Signal Processing, May 14-17, 1991, vol. 1, pp. 441-444.
Garcia et al., "Optimization of Planar Devices by the Finite Element Method," IEEE Transactions on Microwave Theory and Techniques, vol. 38, no. 1, Jan. 1990, pp. 48-53.
Kruschke et al., "Benefits of Gain: Speeded Learning and Minimal Hidden Layers in Back-Propagation Networks," IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 1, Jan.-Feb. 1991, pp. 273-280.
Goryn et al., "Conjugate Gradient Learning Algorithms for Multi-Layer Perceptrons," Proceedings of the 32nd Midwest Symposium on Circuits and Systems, Aug. 14-16, 1989, vol. 2, pp. 736-739.
Jones et al., "Optimization Techniques Applied to Neural Networks: Line Search Implementation for Back-Propagation," International Joint Conference on Neural Networks, Jun. 17-21, 1990, vol. 3, pp. 933-939.
Rumelhart, D. E., et al., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, The MIT Press, Cambridge, Mass., 1988, pp. 322-331.
Makram-Ebeid, S., et al., "A Rationalized Error Back-Propagation Learning Algorithm," IJCNN, 1989, pp. II-373-380.
Inventors: Abe, Masahiro; Higashino, Jun'ichi; Ogata, Hisao; Sakou, Hiroshi
Assignee: Hitachi, Ltd.
Examiner: MacDonald, Allen R.
Agent: Shapiro, Stuart B.