Parallel multi-value neural networks


Details

395/21, 395/23, 364/746.2, G06F 15/18

Patent

active

057684766

DESCRIPTION:

BRIEF SUMMARY
BACKGROUND OF THE INVENTION

The present invention relates to parallel multi-value neural networks, which are applied to large-scale multi-value logic circuits, pattern recognition, associative memories, code conversion and image processing, and which provide a desired multi-value output signal with stable and rapid convergence.
In the prior art, multi-layered neural networks and mutually interconnected neural networks such as the Hopfield network have been applied to various information processing tasks, as described in textbooks, for instance, "Neural Network Information Processing" by H. Aso and "Parallel Distributed Processing" by D. E. Rumelhart, MIT Press. FIG. 1 illustrates a 3-layered neural network 1 which has an input layer 4, a hidden layer 5 and an output layer 6 for the execution process. As is well known, the N elements of an input signal I are fed to the corresponding units of the input layer 4 through an input terminal 2. In each unit of the next layer, the weighted outputs of the units of the input layer 4 are summed, a threshold value is subtracted from the sum, and the result is passed through a sigmoidal function. The output signal O is obtained from the M output units of the output layer 6 through an output terminal 3.
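By way of illustration only (the patent gives no code), the execution process just described might be sketched in Python as follows; the array names, shapes and NumPy usage are assumptions, not part of the patent:

    import numpy as np

    def sigmoid(x):
        # Sigmoidal function applied between layers.
        return 1.0 / (1.0 + np.exp(-x))

    def forward(I, W1, theta1, W2, theta2):
        # Hidden layer: the weighted inputs are summed, a threshold
        # value is subtracted, and the result is fed through the
        # sigmoidal function (as described for FIG. 1).
        hidden = sigmoid(W1 @ I - theta1)
        # Output layer: the same operation yields the M-element output O.
        return sigmoid(W2 @ hidden - theta2)

    # Illustrative sizes: N = 4 inputs, 3 hidden units, M = 2 outputs.
    rng = np.random.default_rng(0)
    O = forward(rng.random(4),
                rng.normal(size=(3, 4)), np.zeros(3),
                rng.normal(size=(2, 3)), np.zeros(2))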
FIG. 2 illustrates a mutually interconnected neural network having one layer with N units. The network has hidden units, and its units are connected to the input and the output through weighting factors. At an equilibrium state, the output signal O is obtained through the terminal 3.
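A minimal sketch of this recall process, assuming bipolar (+1/-1) units and asynchronous updates in the Hopfield style (details the patent does not specify), might be:

    import numpy as np

    def hopfield_recall(s, W, theta, max_sweeps=100):
        # W: symmetric weighting factors with zero diagonal; theta: thresholds.
        s = s.copy()
        for _ in range(max_sweeps):
            changed = False
            for i in np.random.permutation(len(s)):   # asynchronous updates
                new_si = 1 if W[i] @ s - theta[i] >= 0 else -1
                if new_si != s[i]:
                    s[i], changed = new_si, True
            if not changed:
                break    # no unit flipped: equilibrium reached
        return s         # at equilibrium this is the output signal O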
FIG. 3 illustrates a configuration of the 3-layered neural network 1 in a learning process. The back propagation algorithm is well known for updating the weighting factors so as to minimize the power of the difference between a given teacher signal and the output signal of the output layer 6 for a training input signal fed to the input layer 4 through the terminal 2. The difference obtained through a subtracter 9 is fed into a weighting factor controller 10, which calculates the adjustments of the weighting factors for the 3-layered neural network 1. The weighting factors from the terminal 11' are updated and fed back into the neural network 1 through the terminal 11. When the neural network 1 converges in the learning process, the output signal for the training input signal becomes very close to the teacher signal. The back propagation algorithm, however, has the following disadvantages. The neural network 1 is frequently trapped in a state with local minima that are sub-optimal for minimizing the difference power, and it cannot easily escape that state even if the number of training cycles is increased. It is therefore difficult to make the neural network 1 converge completely, without errors between the teacher signal and the output signal, within a small number of training cycles. The same situation also arises for a multi-value teacher signal.
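For concreteness, one update step of the standard textbook back propagation algorithm for the 3-layered network of FIG. 3 is sketched below; this is not code from the patent, and the learning rate eta is an assumed parameter:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def backprop_step(I, T, W1, theta1, W2, theta2, eta=0.1):
        # Forward pass, as in the execution process of FIG. 1.
        hidden = sigmoid(W1 @ I - theta1)
        O = sigmoid(W2 @ hidden - theta2)
        diff = T - O                    # difference taken by subtracter 9
        # Error terms; o * (1 - o) is the derivative of the sigmoid.
        d_out = diff * O * (1.0 - O)
        d_hid = (W2.T @ d_out) * hidden * (1.0 - hidden)
        # Adjust weighting factors down the gradient (role of controller 10).
        W2 += eta * np.outer(d_out, hidden)
        theta2 -= eta * d_out
        W1 += eta * np.outer(d_hid, I)
        theta1 -= eta * d_hid
        return float(diff @ diff)       # power of the difference

Repeating such steps over the training set reduces the difference power, but, as noted above, gradient descent of this kind can stall in a local minimum.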
After falling into a state with local minima, the learning process cannot advance effectively, thereby wasting training cycles. Conventional neural networks address this only with heuristic approaches, such as changing the initial conditions of the weighting factors and/or increasing the number of hidden units or layers, resulting in a huge increase in computation and a complex hardware configuration. Achieving rapid and stable convergence that is insensitive to the initial weighting factors is generally one of the major issues for neural networks having a small number of hidden units and/or multiple layers over a wide range of applications.
FIG. 4 illustrates a configuration of the mutually interconnected neural network 7 in the learning process. In the mutually interconnected neural network 7, an equilibrium state of the network becomes stable with optimum weighting factors, and provides the output signal for the training input signal. In a weighting factor processor 12, optimum weighting factors are calculated from a stored initial input signal S including a teacher signal to provide the equilibrium state having a minimum energy, and are then set in the mutually interconnected neural network 7.
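The patent does not state how the weighting factor processor 12 computes these optimum weighting factors; purely as an illustration, the classical Hebbian outer-product rule, which makes each stored bipolar pattern a minimum-energy equilibrium state of such a network, is sketched below:

    import numpy as np

    def hebbian_weights(patterns):
        # patterns: list of bipolar (+1/-1) vectors to be stored,
        # e.g. derived from the stored initial input signal S (assumption).
        N = len(patterns[0])
        W = np.zeros((N, N))
        for p in patterns:
            W += np.outer(p, p)     # correlate each pattern with itself
        np.fill_diagonal(W, 0.0)    # no self-connections
        return W / len(patterns)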

REFERENCES:
patent: 3659089 (1972-04-01), Payne
patent: 3798606 (1974-03-01), Henle
patent: 4296494 (1981-10-01), Ishikawa
patent: 4949293 (1990-08-01), Kawamura
patent: 5053974 (1991-10-01), Penz
patent: 5095443 (1992-03-01), Watanabe
patent: 5216750 (1993-06-01), Smith
patent: 5227993 (1993-07-01), Yamakawa
patent: 5253328 (1993-10-01), Hartman
patent: 5293457 (1994-03-01), Arima
patent: 5309525 (1994-05-01), Shimomura
patent: 5329611 (1994-07-01), Pechanek
patent: 5524178 (1996-06-01), Yokono
