Data processing: artificial intelligence – Neural network – Structure
Patent
1995-02-24
1998-10-13
Downs, Robert W.
Data processing: artificial intelligence
Neural network
Structure
706/20, 706/25, 382/159, 382/181, 382/185, G06F 15/18
active
058227423
DESCRIPTION:
BRIEF SUMMARY
BACKGROUND OF THE INVENTION
The present invention relates to a dynamically stable associative learning neural network system involving neuron circuits and networks and, more particularly, to a neuron circuit employing a novel learning rule which enables associative learning, including correlations and anti-correlations, with decreased computational time and complexity.
Efforts have been made to use neural networks to emulate human-like performance in pattern recognition, including applications in speech and image recognition, classification systems, and adaptive control systems. The basic computational element of a neural network is the neuron circuit, which typically has multiple input lines and a single output line. The output response of a neuron circuit is generally a nonlinear function of the sum of the signal amplitudes on its input lines, with an output response being triggered when the sum of the input signal amplitudes exceeds a threshold value. The output of a neuron circuit may be coupled to the input of more than one other neuron circuit.
A neural network is formed by interconnecting the neuron circuits through synapses, each of which has an associated weight for modifying any signal passing through it. The amplitude of a signal exiting a synapse is thus the product of the weight associated with that synapse and the amplitude of the signal entering the synapse. A synapse may be either excitatory, that is, its weight is positive because it contributes to production of a signal by the associated neuron circuit, or inhibitory, that is, its weight is negative.
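The neuron-circuit model described above can be sketched as follows. This is an illustrative example only, not code from the patent; the function name, weights, and threshold value are assumptions chosen for clarity. Each synapse multiplies its input signal by a weight (positive for excitatory, negative for inhibitory), and the neuron fires when the weighted sum exceeds a threshold:

```python
def neuron_output(inputs, weights, threshold=1.0):
    """Threshold neuron: fire (return 1) if the weighted input sum
    exceeds the threshold, otherwise return 0."""
    # Each synapse scales its incoming signal by its weight; the
    # neuron sums the scaled signals across all input lines.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two excitatory synapses (+0.8, +0.7) and one inhibitory synapse (-0.3):
# weighted sum = 0.8 + 0.7 - 0.3 = 1.2 > 1.0, so the neuron fires.
print(neuron_output([1.0, 1.0, 1.0], [0.8, 0.7, -0.3]))
```

If the second input is silent, the sum drops to 0.5 and the neuron does not fire, illustrating how the same circuit responds differently to different input patterns.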
The output end of each synapse terminates at an input line to a neuron circuit, with the other end connected to either the output line of another neuron circuit or to a primary input (i.e., the receptor) to the neural network. The primary outputs of a neural network are each derived from a single output line of one of the neuron circuits in the system. With such loose restrictions, a large number of differently configured neural networks can be formed by simply varying the synaptic connections between neuron circuits.
The two major classes of artificial neural networks developed are (1) single-layer networks, in which a set of neuronal elements are fully interconnected with each other and which function as associators, and (2) multilayer perceptrons, in which all interconnections are feed-forward connections between layers and which function as pattern classifiers. For the networks in these two classes, the basic neuronal models used are variants of a concept in which (1) "synaptic" inputs to an element are summed and the element fires if a threshold is exceeded, and (2) the weight or strength of a synaptic junction is increased only if both the presynaptic and postsynaptic elements fire. Essentially all of the currently popular neural network designs implicitly or explicitly use the output of the element to be trained in the adjustment of the weights on its inputs. In most implementations, a nonlinear optimization or relaxation algorithm is used for setting weights. These algorithms are all computationally intensive, since for each set of inputs to be learned, many iterations are required before convergence is achieved and the network has learned the input patterns. Depending on the details of the implementation, the computational complexity of these algorithms (reflected in the number of iterations needed to achieve convergence) is between O(N^2) and O(N^3), where N is the number of weights (connections) to be determined. Consequently, the total computational effort per connection increases as the number of connections is increased. A survey of neural networks can be found in R. P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, pp. 4-22, Apr. 1987, and in Neural Networks for Control, 1992.
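The classical weight-increase rule summarized above, in which a synaptic weight grows only when both the presynaptic and postsynaptic elements fire, can be sketched as a simple Hebbian update. The function name and learning rate below are illustrative assumptions, not details from the patent:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """Hebbian rule: strengthen weights[i] only when the presynaptic
    signal pre[i] and the postsynaptic firing `post` are both active.

    pre  -- list of 0/1 presynaptic activities, one per input line
    post -- 0/1 firing state of the postsynaptic element
    """
    # The product pre[i] * post is nonzero only when both elements fire,
    # so only those weights are increased; all others are unchanged.
    return [w + lr * x * post for w, x in zip(weights, pre)]

w = [0.2, 0.5]
# The postsynaptic element fired and only the first input was active,
# so only the first weight is strengthened (0.2 -> ~0.3).
w = hebbian_update(w, pre=[1, 0], post=1)
```

Because every presented pattern requires such an update (and, in the optimization-based schemes mentioned above, many iterations of it per pattern), the per-connection cost grows with network size, which is the computational burden the invention seeks to reduce.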
Neural networks are taught by successive presentation of sets of signals to their primary inputs, with each signal set derived from a pattern belonging to a class of patterns, all having some common features or characteristics.
REFERENCES:
patent: 5040230 (1991-08-01), Takatori et al.
patent: 5222195 (1993-06-01), Alkon et al.
patent: 5335291 (1994-08-01), Kramer et al.
patent: 5359700 (1994-10-01), Seligson
patent: 5588091 (1996-12-01), Alkon et al.
R.P. Lippmann, "An Introduction to Computing with Neural Nets," IEEE ASSP Magazine, pp. 4-22, Apr. 1987.
J. Moody and C. Darken, "Learning with Localized Receptive Fields," Proc. 1988 Connectionist Models Summer School, pp. 133-143, Dec. 1988.
R.P. Lippmann, "Pattern Classification Using Neural Networks," IEEE Communications Magazine, vol. 27 (11), pp. 47-64, Nov. 1989.
S. Wolpert, et al., "A VLSI-Based Gaussian Kernel Mapper for Real-Time RBF Neural Networks," 1992 18th Annual Northwest Conf. on Bioengineering, pp. 51-52, Mar. 1992.
G. Dorffner, "Cascaded Associative Networks for Complex Learning Tasks," Neural Networks from Models to Applications, pp. 244-253, Dec. 1988.
H. R. Berenji and P.S. Khedkar, "Clustering in Product Space for Fuzzy Inference," Int'l. Conf. on Fuzzy Systems, vol. 2, pp. 1402-1407, Mar. 1993.
Vogl, IJCNN-91, International Joint Conference on Neural Networks, 1:97-102, Jul. 8, 1991.
Alkon Daniel L.
Barbour Garth S.
Blackwell Kim T.
Vogl Thomas P.
Downs Robert W.
ERIM International, Inc.
The United States of America as represented by the Secretary of