Artificial neural network with hardware training and...
Data processing: artificial intelligence – Neural network – Structure
Reexamination Certificate
1999-10-01
2003-01-28
Starks, Jr., Wilbert L. (Department: 2121)
Data processing: artificial intelligence
Neural network
Structure
C706S034000, C706S039000, C706S040000, C708S801000
Reexamination Certificate
active
06513023
ABSTRACT:
ORIGIN OF THE INVENTION
The invention described herein was made in the performance of work under a NASA contract, and is subject to the provisions of Public Law 96-517 (35 USC 202) in which the contractor has elected not to retain title.
BACKGROUND
Neural networks offer a computing paradigm that allows a nonlinear input/output relationship or transformation to be established based primarily on given examples of the relationship rather than a formal analytical knowledge of its transfer function. This paradigm provides for a training of the network during which the weight values of the synaptic connections from one layer of neurons to another are changed in an iterative manner to successively reduce error between actual and target outputs.
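As a rough software illustration of this iterative weight-adjustment paradigm (a minimal sketch, not taken from the patent; the network size, learning rate, and example data are assumptions), a single sigmoid neuron can be trained by repeatedly reducing the error between actual and target outputs:

    import numpy as np

    # Minimal illustrative sketch (not from the patent): one sigmoid neuron whose
    # synaptic weights are adjusted iteratively to reduce the error between
    # actual and target outputs.

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    inputs = rng.uniform(-1.0, 1.0, size=(100, 3))            # example input patterns
    targets = sigmoid(inputs @ np.array([0.5, -1.0, 2.0]))    # ground-truth relationship

    weights = rng.normal(scale=0.1, size=3)                   # synaptic weights to be learned
    learning_rate = 0.5

    for epoch in range(1000):
        outputs = sigmoid(inputs @ weights)
        error = outputs - targets
        # Gradient of the mean squared error with respect to the weights
        grad = inputs.T @ (error * outputs * (1.0 - outputs)) / len(inputs)
        weights -= learning_rate * grad                       # iterative weight update

    print("learned weights:", weights)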
Typically, for a neural network to establish the transformation, the input data is divided into three parts. Two of the parts, called the training and cross-validation sets, must be such that the corresponding input-output pairs (ground truth) are known. During training, the cross-validation set is used to verify the transformation relationship the network has learned, both to ensure adequate learning has occurred and to avoid over-learning. The third part, termed the validation data, which may or may not include the training and/or cross-validation data, is the data actually transformed into output.
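For illustration only (the split fractions and function name are assumptions, not part of the patent), such a three-way division of a labelled data set might be written as:

    import numpy as np

    # Hypothetical three-way split: the training and cross-validation parts have
    # known input-output pairs (ground truth); the validation part is the data
    # that is ultimately transformed into output.

    def three_way_split(x, y, train_frac=0.6, cv_frac=0.2, seed=0):
        idx = np.random.default_rng(seed).permutation(len(x))
        n_train = int(train_frac * len(x))
        n_cv = int(cv_frac * len(x))
        train, cv, val = idx[:n_train], idx[n_train:n_train + n_cv], idx[n_train + n_cv:]
        return (x[train], y[train]), (x[cv], y[cv]), x[val]

    x = np.arange(20).reshape(10, 2).astype(float)
    y = x.sum(axis=1)
    (x_tr, y_tr), (x_cv, y_cv), x_val = three_way_split(x, y)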
Neural networks may be formed with software, hardware, or hybrid implementations for training connectionist models. One drawback with software techniques is that, because computers execute programmed instructions sequentially, the iterative process can be inconveniently slow and require vast amounts of computing resources to process the large number of connections necessary for most neural network applications. As such, software techniques are not feasible for most applications, particularly where computing resources are limited and large amounts of information must be processed.
In one approach for analog implementation of a synapse, the weight is stored as a charge on a capacitor. A problem with representing a weight as a stored charge is that charge leakage changes the weight of the connection. Although there are several approaches to reducing charge leakage, such as lowering the capacitor's temperature or increasing its capacitance, they are not practical for most applications. As an alternative, an electrically erasable programmable read-only memory (EEPROM) may be used. Although this eliminates the charge leakage problem, such a device is too slow for high-speed learning networks.
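To illustrate why leakage matters (a toy model with an assumed time constant, not a value from the patent), the stored charge, and hence the weight, decays exponentially through the capacitor's leakage path:

    import numpy as np

    # Toy model with assumed values: a weight stored as charge on a capacitor
    # decays exponentially through its leakage resistance, so the effective
    # synaptic weight drifts unless it is refreshed or retrained.

    tau = 1.0                      # assumed leakage time constant, in seconds
    w0 = 0.8                       # initial weight (normalized stored charge)
    for t in np.linspace(0.0, 5.0, 6):
        print(f"t = {t:.0f} s  weight = {w0 * np.exp(-t / tau):.3f}")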
Hybrid systems, on the other hand, are able to overcome the problem of charge leakage associated with capacitively stored weights by controlling training and refresh training digitally. In a typical hybrid system, the capacitively stored weight is digitized and monitored with digital circuitry to determine whether further training or refresh training is necessary. When necessary, the weight of the neuron is refreshed using the digitally stored target weight.
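A rough sketch of such a digital monitor-and-refresh loop might look like the following (the converter resolution, tolerance, and function names are assumptions, not the patent's values):

    # Rough sketch with assumed parameters: the analog weight is digitized,
    # compared against a digitally stored target, and written back when the
    # drift exceeds a tolerance.

    def quantize(value, bits=8):
        levels = 2 ** bits - 1
        return round(value * levels) / levels

    def refresh_if_needed(analog_weight, digital_target, tolerance=0.01, bits=8):
        measured = quantize(analog_weight, bits)     # A/D conversion of the stored charge
        if abs(measured - digital_target) > tolerance:
            return digital_target                    # D/A write-back restores the weight
        return analog_weight

    print(refresh_if_needed(0.73, 0.80))             # drifted weight is refreshed to 0.8
    print(refresh_if_needed(0.799, 0.80))            # within tolerance, left untouched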
A significant drawback of hybrid training and refresh approaches is that they are not practical for the very large-scale neural networks required by most applications, because A/D and D/A converters must be used for weight quantization. For most training techniques, such as Error Back Propagation, weight quantization of each synaptic link requires at least 12-bit precision to provide sufficient resolution even for simple problems. Such resolution is impractical for most implementations due to expense and size concerns, so either the resolution or the processing capability of the neural network usually is sacrificed. Providing such resolution for each neuron of a massive neural network thus makes this approach impractical for typical applications.
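The resolution requirement can be illustrated with a small, purely hypothetical calculation: a weight update smaller than one quantization step is rounded away at coarse resolution but survives at finer resolution.

    # Illustrative only: why coarse weight quantization can stall training.
    # An update smaller than one quantization step is rounded away, so the
    # weight never moves; finer resolution preserves the update.

    def quantize(value, bits):
        step = 1.0 / (2 ** bits)                     # step size for a weight in [0, 1)
        return round(value / step) * step

    weight, update = 0.5, 0.0005                     # assumed small back-propagation update
    for bits in (8, 12):
        before = quantize(weight, bits)
        after = quantize(weight + update, bits)
        print(f"{bits:2d}-bit: before={before:.6f} after={after:.6f} changed={after != before}")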
SUMMARY OF THE PREFERRED EMBODIMENTS
In an embodiment of the present invention, a neural network circuit is provided having a plurality of circuits capable of charge storage. Also provided is a plurality of circuits each coupled to at least one of the plurality of charge storage circuits and constructed to generate an output in accordance with a neuron transfer function, along with a plurality of circuits, each coupled to one of the plurality of neuron transfer function circuits and constructed to generate a derivative of the output. A weight update circuit updates the charge storage circuits based upon output from the plurality of transfer function circuits and output from the plurality of derivative circuits.
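A behavioural software analogue of this arrangement (the class names, tanh transfer function, and update rule are assumptions for illustration, not the patent's circuit topology) might be organized as:

    import numpy as np

    # Behavioural analogue, illustrative names only: charge-storage elements hold
    # the weights, a transfer-function element produces the neuron output, a
    # derivative element produces the matching derivative, and a weight-update
    # element adjusts the stored charge from both outputs.

    class ChargeStorage:
        def __init__(self, n):
            self.weights = np.zeros(n)               # stands in for charge on capacitors

    class Neuron:
        def transfer(self, x):
            return np.tanh(x)                        # assumed sigmoidal transfer function
        def derivative(self, x):
            return 1.0 - np.tanh(x) ** 2             # derivative circuit's output

    class WeightUpdate:
        def __init__(self, rate=0.1):
            self.rate = rate
        def apply(self, storage, inputs, target, neuron):
            x = inputs @ storage.weights
            out, slope = neuron.transfer(x), neuron.derivative(x)
            storage.weights += self.rate * (target - out) * slope * inputs

    storage, neuron, updater = ChargeStorage(3), Neuron(), WeightUpdate()
    updater.apply(storage, np.array([0.2, -0.5, 0.9]), 0.5, neuron)
    print(storage.weights)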
In preferred embodiments, a training network and a validation network share the same set of charge storage circuits and may operate concurrently. The training network has a plurality of circuits capable of charge storage and a plurality of transfer function circuits, each coupled to at least one of the charge storage circuits. In addition, the training network has a plurality of derivative circuits, each coupled to one of the plurality of transfer function circuits and constructed to generate a derivative of the output of that transfer function circuit. The validation network has a plurality of transfer function circuits, each coupled to the plurality of charge storage circuits so as to replicate the training network's coupling of the plurality of charge storage circuits to the plurality of transfer function circuits.
Embodiments of each of the plurality of transfer function circuits may be constructed having a transconductance amplifier. The transconductance amplifier is constructed to provide differential currents I1 and I2 from an input current Iin and to combine the differential currents to provide an output in accordance with a transfer function. In such embodiments each of the plurality of derivative circuits may have a circuit constructed to generate a biased I1 and a biased I2, combine the biased I1 and biased I2, and provide an output in accordance with the derivative of the transfer function. In a preferred embodiment, in order to provide the derivative of the transfer function from the biasing and combining circuits and the transconductance amplifier outputs, each of the plurality of derivative circuits has a subtraction circuit.
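One way to picture this behaviour in software (an interpretation under assumed equations, not the patent's exact circuit): the differential currents realize a sigmoidal transfer function, and biased copies of those currents, combined through a subtraction stage, approximate its derivative.

    import numpy as np

    # Interpretive sketch under assumed equations, not the patent's circuit: a
    # differential pair splits the input into currents I1 and I2 whose combination
    # is a sigmoidal transfer function; biased (offset) copies combined by a
    # subtraction stage approximate the derivative of that transfer function.

    def differential_currents(i_in, i_bias=1.0):
        i1 = i_bias / (1.0 + np.exp(-i_in))          # one leg of the differential pair
        i2 = i_bias / (1.0 + np.exp(i_in))           # complementary leg
        return i1, i2

    def transfer(i_in):
        i1, i2 = differential_currents(i_in)
        return i1 - i2                               # combined output, proportional to tanh(i_in / 2)

    def transfer_derivative(i_in, offset=0.05):
        # Biased copies of the currents, combined and then subtracted, give a
        # finite-difference estimate of the transfer function's slope.
        return (transfer(i_in + offset) - transfer(i_in - offset)) / (2.0 * offset)

    print(transfer(0.3), transfer_derivative(0.3))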
A preferred method of the present invention is performed by creating a plurality of synaptic weights by storing charge on a plurality of capacitive circuits and generating a plurality of neuron outputs in accordance with a transfer function. The outputs are generated from the plurality of weights using a plurality of transfer function circuits. The derivative of each of the plurality of neuron outputs is generated using a plurality of derivative circuits each coupled to one of the plurality of transfer function circuits. A neural network is trained using a plurality of delta weights which are generated using the plurality of transfer function derivatives.
Furthermore, in a preferred method, a plurality of synaptic weights are established by storing charge on a plurality of capacitive circuits using a training network having a plurality of neurons, each capable of providing outputs in accordance with a transfer function. The plurality of weights are shared with a validating network having a second plurality of neurons, each capable of providing outputs in accordance with the transfer function. With this method, cross-validation testing or validation testing may be performed using the validation network, and such testing may be performed simultaneously with training of the neural network.
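As a software analogue of this weight sharing (the class names, data, and training details are assumptions), two network views can read one common weight store, so cross-validation error can be checked while training proceeds:

    import numpy as np

    # Software analogue with assumed details: the training network updates a
    # shared weight store while the validation network reads the same weights to
    # report cross-validation error alongside training.

    class SharedWeights:
        def __init__(self, n, seed=0):
            self.w = np.random.default_rng(seed).normal(scale=0.1, size=n)

    def forward(shared, x):
        return np.tanh(x @ shared.w)                 # same transfer function in both networks

    def train_step(shared, x, y, rate=0.05):
        out = forward(shared, x)
        slope = 1.0 - out ** 2
        shared.w += rate * x.T @ ((y - out) * slope) / len(x)

    def cross_validate(shared, x, y):
        return float(np.mean((forward(shared, x) - y) ** 2))

    rng = np.random.default_rng(1)
    x_train, x_cv = rng.uniform(-1, 1, (80, 3)), rng.uniform(-1, 1, (20, 3))
    true_w = np.array([0.7, -0.2, 0.5])
    y_train, y_cv = np.tanh(x_train @ true_w), np.tanh(x_cv @ true_w)

    shared = SharedWeights(3)
    for epoch in range(200):
        train_step(shared, x_train, y_train)           # training network adjusts the shared weights
        cv_error = cross_validate(shared, x_cv, y_cv)  # validation network reads the same weights
    print(f"final cross-validation error: {cv_error:.4f}")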
Such an approach eliminates the need for digital refresh circuitry and allows the advantages of speed, simplicity, and accuracy provided by analog storage to be exploited.
REFERENCES:
patent: 4866645 (1989-09-01), Lish
patent: 4912652 (1990-03-01), Wood
patent: 4951239 (1990-08-01), Andes et al.
patent: 5039870 (1991-08-01), Engeler
patent: 5039871 (1991-08-01), Engeler
patent: 5093899 (1992-03-01), Hiraiwa
patent: 5113483 (1992-05-01), Keeler et al.
patent: 513056
The United States of America as represented by the Administrator of the National Aeronautics and Space Administration