Multi-winners feedforward neural network

Data processing: artificial intelligence – Neural network

Reexamination Certificate

Details

active

06463423

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to a multi-winners feedforward neural network. More particularly, this invention relates to a neural network used in a learning-type pattern recognition device such as character recognition, voice recognition, picture recognition, and so forth.
DESCRIPTION OF THE PRIOR ART
Various models have been proposed as architectures for realizing a neural network constructed from neuron-elements with a learning function. Among such models are the perceptron, the recurrent network, the Hopfield network, the Neocognitron, the error back-propagation method, and the self-organizing map method. The individual function of each of these models can be achieved by analog or digital circuit techniques, by a program describing the function with a processor as the fundamental circuit, or by a combination of both techniques.
In a neural network system based on analog circuits, the neuron-element processing the input obtains, as the state function of the neuron, the product-sum between a plurality of input signals and the weight assigned to each input signal. The product-sum is obtained using an analog multiplier such as a Gilbert amplifier, or an adder. The product-sum output is compared, using a comparator, with a threshold value held by each neuron-element, and the result then undergoes a function transformation according to a designated function such as the sigmoid function, whose result is output. These techniques are described concretely in "Analog VLSI and Neural Systems" by Carver Mead, 1989, and in "Neural Network LSI" by Atsushi Iwata and Yoshihito Amemiya, 1996, published by the Institute of Electronics, Information and Communication Engineers.
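As a minimal sketch of the processing just described (the function name `neuron_output` and the sample values are illustrative assumptions, not taken from the patent), the product-sum, threshold comparison, and sigmoid transformation might be expressed as:

```python
import math

def neuron_output(inputs, weights, threshold):
    """Product-sum of inputs and weights, shifted by the neuron's
    threshold, then transformed by a sigmoid function."""
    s = sum(x * w for x, w in zip(inputs, weights))   # product-sum
    return 1.0 / (1.0 + math.exp(-(s - threshold)))   # sigmoid transform

# Example with two inputs and two weights:
y = neuron_output([1.0, 0.5], [0.8, -0.2], 0.1)
```

In the analog realization described above, the multiply-accumulate is performed by the Gilbert amplifier or adder and the comparison by a comparator; the sketch merely mirrors that signal flow in software.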
In the latter system, based on a processor circuit, all processing is described by a program. The product-sum function of the neurons, the comparison function, the function-transformation function, and so forth are described in software and distributed to a plurality of processors to be calculated in parallel.
Furthermore, characteristic functions of part of the neuron-element, such as the product-sum function and the pulse output function, may be realized by an analog circuit before analog-to-digital conversion, so that the remaining functions are processed digitally; that is, the processing is arranged as a compromise between the analog method and the digital method.
When an LSI (Large Scale Integration) circuit is constituted from neuron-elements in the conventional system, the neural network must be able to carry out learning without difficulty on all kinds of subjects in every problem. In order for the neural network as an LSI to adapt flexibly to a problem, it is desirable that the respective neurons be capable of learning the subjects autonomously (an unsupervised learning system).
As unsupervised learning methods, the Hebbian learning system, the self-organizing map method, et al. are known. In the Hebbian learning method, competitive learning is carried out between neurons in the same layer: neurons in a group within the neural network are arranged in layers, and only one neuron in each layer is permitted to be excitatory as the winner. As is clear from conventional achievements, self-learning is possible according to this single-winner method, hereinafter called the winner-take-all method, owing to competition between neurons in the same layer. In the self-organizing map method of Kohonen, a single winner is selected, and learning is carried out only for the neurons adjacent to the winner. However, according to the winner-take-all method, it is necessary to provide one neuron for every item intended to be classified and extracted as a feature; thus there is the problem that the number of neurons in the output layer is dissipated, i.e., grows excessively.
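The winner-take-all competition with a simplified competitive weight update can be sketched as follows (a hedged illustration; the function names and learning rate are assumptions, not the patent's definitions):

```python
def winner_take_all(activations):
    """Index of the single neuron with the largest activation:
    only this neuron is permitted to be excitatory."""
    return max(range(len(activations)), key=activations.__getitem__)

def competitive_update(weights, inputs, winner, rate=0.1):
    """Move only the winner's weight vector toward the input
    (a simplified competitive-learning step)."""
    weights[winner] = [w + rate * (x - w)
                       for w, x in zip(weights[winner], inputs)]
    return weights

winner = winner_take_all([0.2, 0.9, 0.4])
updated = competitive_update([[0.0, 0.0], [0.5, 0.5]], [1.0, 0.0], 1)
```

The dissipation problem noted above follows directly from this scheme: one winning neuron per extractable category means the output layer must hold as many neurons as there are categories.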
On the other hand, the error back-propagation method exhibits suitable learning efficiency as supervised learning. The error back-propagation method obtains the error between the output produced by presentation of an input signal and the teacher signal expected as output, and then carries out learning while changing the weights of the intermediate hidden layer of the front stage and the weights of the output layer so as to render the error small. The error back-propagation method does not require the winner-take-all method, so there is no problem of dissipation in connection with the number of neurons. However, the error back-propagation method requires a teacher signal and complicated calculations for updating the weights at the time of learning; therefore, there is the problem that the data-processing circuit of the neurons becomes complicated when made into an LSI.
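The weight-update calculation that makes back-propagation circuitry complicated can be seen even in the simplest case, a single sigmoid output neuron trained by the delta rule (this sketch is a generic textbook form, not the patent's circuit; names and the learning rate are illustrative):

```python
def delta_rule_update(weights, inputs, output, teacher, rate=0.5):
    """One weight update for a single sigmoid output neuron: the
    error (teacher - output) is scaled by the sigmoid derivative
    output * (1 - output) before adjusting each weight."""
    delta = (teacher - output) * output * (1.0 - output)
    return [w + rate * delta * x for w, x in zip(weights, inputs)]

# teacher signal 1.0, current output 0.6:
new_w = delta_rule_update([0.5, 0.5], [1.0, 1.0], 0.6, 1.0)
```

Even this one-neuron case needs a multiplication chain per weight, and the full method additionally propagates deltas backward through the hidden layers, which is the circuit cost the passage above refers to.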
There are disclosed "A Self-Organizing Neural Network Using Multi-Winners Competitive Procedure" and "Multi-Winners Self-Organizing Neural Network" by Gjiongtao HUANG and Masafumi HAGIWARA. The former was published in 1995 as a report of THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS (TECHNICAL REPORT OF IEICE, NC 94-94, pp. 43-150). The latter was published in 1997 as a report of the same institute (TECHNICAL REPORT OF IEICE, NC 96-156, pp. 7-14). These are methods for reducing the number of neurons while permitting multiple winners. The multi-winners self-organizing neural network of HUANG and HAGIWARA obtains the sum total of the outputs of the competitive layer at the time of updating the weights in learning. This neural network prevents dissipation and divergence of the weights by normalizing the outputs of the competitive layer with the obtained sum total. However, a complicated procedure is required for normalization, in that the sum total of the outputs must first be obtained before the respective output values are normalized with it; thus there is the problem that the neuron circuit becomes complicated when made into an LSI.
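The normalization step criticized here can be sketched as follows (an illustrative reading of the procedure described above, not code from either report; the function name is an assumption):

```python
def normalize_by_sum_total(outputs):
    """Divide every competitive-layer output by the layer's sum
    total, so that subsequent weight updates cannot diverge."""
    total = sum(outputs)
    return [o / total for o in outputs] if total else list(outputs)

normalized = normalize_by_sum_total([2.0, 1.0, 1.0])
```

Note that the division by `total` requires a global reduction over all outputs of the layer before any single output can be normalized; it is this global dependency, rather than the arithmetic itself, that complicates a per-neuron LSI circuit.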
SUMMARY OF THE INVENTION
In view of the foregoing, it is an object of the present invention, in order to overcome the above-mentioned problems, to provide a multi-winners feedforward neural network which is constituted using neuron-elements obtained by modeling the neural network, which can carry out learning autonomously with respect to any problem, and which can be constituted simply with a small LSI circuit.
According to a first aspect of the present invention, in order to achieve the above-mentioned objects, there is provided a multi-winners feedforward neural network in which a plurality of neurons constitute a hierarchical structure of one or more layers, and which has a controller for controlling the number of firing neurons, the number of firing neurons being more than two, such that the number of firing neurons is restrained in every layer to a specified value and/or a range of the specified value.
Consequently, more than two neurons are excited in each layer; therefore, it is not necessary to carry out the complicated processing of the well-known winner-take-all method with its single excited neuron. The multi-winners feedforward neural network can be realized simply, on the grounds that a threshold value is set in each layer for causing the neurons to be excited.
According to a second aspect of the present invention, in the firs
