Concurrent learning and performance information processing...

Patent number: 06289330
Type: Reexamination Certificate (active)
Filed: 1998-08-20
Issued: 2001-09-11
Examiner: Powell, Mark P. (Department: 2122)
Classification: Data processing: artificial intelligence – Neural network – Structure
US Classes: C706S025000, C706S027000
ABSTRACT:
FIELD OF INVENTION
Generally, the present invention relates to the field of parallel processing neurocomputing systems, and more particularly to real-time parallel processing in which learning and performance occur during a sequence of measurement trials.
BACKGROUND
Conventional statistics software and conventional neural network software identify input-output relationships during a training phase, and each applies the learned input-output relationships during a performance phase. For example, during the training phase a neural network adjusts connection weights until known target output values are produced from known input values. During the performance phase, the neural network uses the connection weights identified during the training phase to impute unknown output values from known input values.
A conventional neural network consists of simple interconnected processing elements. The basic operation of each processing element is the transformation of its input signals to a useful output signal. Each interconnection transmits signals from one element to another element, with a relative effect on the output signal that depends on the weight for the particular interconnection. A conventional neural network may be trained by providing known input values and output values to the network, which causes the interconnection weights to be changed.
A variety of conventional neural network learning methods and models have been developed for massively parallel processing. Among these methods and models, back propagation is the most widely used learning method and the multi-layer perceptron is the most widely used model. Multi-layer perceptrons have two or more processing element layers, most commonly an input layer, a single hidden layer and an output layer. The hidden layer contains processing elements that enable conventional neural networks to identify nonlinear input-output relationships.
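To make the multi-layer perceptron topology concrete, the following is a minimal sketch of a forward pass through a network with one hidden layer (the tanh activation, the array shapes, and all identifiers are illustrative assumptions, not taken from the patent):

    import numpy as np

    def mlp_forward(x, W1, b1, W2, b2):
        """Forward pass through a one-hidden-layer perceptron.

        x:  input vector, shape (n_in,)
        W1: input-to-hidden weights, shape (n_hidden, n_in)
        W2: hidden-to-output weights, shape (n_out, n_hidden)
        """
        hidden = np.tanh(W1 @ x + b1)  # nonlinear hidden layer
        return W2 @ hidden + b2        # linear output layer

The hidden layer's nonlinearity is what lets the network represent nonlinear input-output relationships; with it removed, the network collapses to a single linear map.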
Conventional neural network learning and performing operations can be completed quickly during each respective stage, because neural network processing elements can operate in parallel. Conventional neural network accuracy depends on data predictability and on the network structure pre-specified by the user, including the number of layers and the number of processing elements in each layer.
Conventional neural network learning occurs when a set of training records is imposed on the network, with each such record containing fixed input and output values. The network uses each record to update the network's learning by first computing network outputs as a function of the record inputs along with connection weights and other parameters that have been learned up to that point. The weights are then adjusted depending on the closeness of the computed output values to the training record output values. For example, suppose that a training record output value is 1.0 and the network computed value is 0.4. The network error will be 0.6 (1.0−0.4=0.6), which will be used to determine the weight adjustments necessary for minimizing the error. Training occurs by adjusting weights in the same way until all such training records have been used, after which the process is repeated until all error values have been sufficiently reduced.
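For concreteness, the following sketch shows one such adjustment for a single linear processing element using the delta rule; the passage above gives only the error arithmetic, so the update rule and learning rate here are assumptions:

    def train_step(weights, inputs, target, rate=0.1):
        """Adjust weights for one training record to reduce the output error."""
        output = sum(w * x for w, x in zip(weights, inputs))
        error = target - output  # e.g. 1.0 - 0.4 = 0.6
        # Nudge each weight in the direction that shrinks the error.
        return [w + rate * error * x for w, x in zip(weights, inputs)]

Repeating this step over all training records, and then over many passes, is what drives the error values down during the training phase.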
Conventional neural network training and performance phases differ in two basic ways. While weight values change during training to decrease errors between training and computed outputs, weight values are fixed during the performance phase. Additionally, output values are known during the training phase, but output values can only be predicted during the performance phase. The predicted output values are a function of performance phase input values and connection weight values that were learned during the training phase.
While input-output relationship identification through conventional statistical analysis and neural network analysis may be satisfactory for some applications, both such approaches have limited utility in other applications. Effective manual data analysis requires extensive training and experience, along with time-consuming effort. Conventional neural network analysis requires less training and effort, although the results produced by conventional neural networks are less reliable and harder to interpret than manual results.
A deficiency of both conventional statistics methods and conventional neural network methods results from the distinct training and performance phases implemented by each method. Requiring two distinct phases causes considerable learning time to be spent before performance can begin. Training delays occur in manual statistics methods because even trained expert analysis takes considerable time, and training delays occur in neural network methods because many training passes through numerous training records are needed. Thus, both conventional statistical analysis and conventional neural network analysis are limited to settings where (a) delays are acceptable between the time learning occurs and the time learned models are used, and (b) input-output relationships remain stable between the time training analysis begins and the time performance operations begin.
Thus, there is a need in the art for an information processing system that can operate quickly to learn, to perform, or to do both within a single time trial.
SUMMARY OF THE INVENTION
Generally described, the present invention provides a data analysis system that receives measured input values for variables during a time trial and learns relationships among the variables gradually, improving the learned relationships from trial to trial. Additionally, in some embodiments, if any input values are missing, the present invention provides, during the time trial, an imputed output value for each missing value based on the previously learned relationships among the analyzed variables.
More particularly, an embodiment of the present invention may provide the imputed values by implementing a mathematical regression analysis of feature values that are functions of the input values. The regression analysis may be performed by utilizing a matrix of connection weights to predict each feature value as a weighted sum of the other feature values. The connection weight elements are updated during each trial to reflect new connection weight information from that trial's input measurements. A component learning weight is also utilized during each trial; it determines the amount of impact that the current input measurement vector has on learning relative to previously received vectors.
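The following is a minimal sketch of one such trial, assuming an exponentially weighted mean-and-covariance update in which the learning weight sets the impact of the new measurement vector; missing values are marked as NaN, and every identifier is illustrative rather than taken from the patent:

    import numpy as np

    def trial_update(x, mean, cov, learn_w):
        """One measurement trial: impute missing features by regression
        on the observed features, then update the learned statistics."""
        observed = ~np.isnan(x)
        x_hat = np.where(observed, x, mean)  # fall back to prior means
        o = np.flatnonzero(observed)

        # Predict each missing feature as a weighted sum of observed ones
        # (assumes the observed-feature submatrix is nonsingular).
        for j in np.flatnonzero(~observed):
            if o.size:
                w = np.linalg.solve(cov[np.ix_(o, o)], cov[o, j])
                x_hat[j] = mean[j] + w @ (x_hat[o] - mean[o])

        # The learning weight controls how far this trial moves the estimates.
        d = x_hat - mean
        mean = mean + learn_w * d
        cov = (1.0 - learn_w) * (cov + learn_w * np.outer(d, d))
        return x_hat, mean, cov

Calling such a routine once per trial realizes learning (the statistics update) and performance (the imputation) in the same pass, rather than in two separate phases.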
Embodiments of the present invention may process the input values in parallel or sequentially. The input values may be provided in the form of vectors, and each value of the input feature vector is operated on individually with respect to previously learned parameters. In the parallel embodiment, a plurality of processors process the input values, with each processor dedicated to receiving a specific input value from the vector. That is, if the system is set up to receive sixteen input feature values (i.e., corresponding to a vector of length sixteen), sixteen processing units are used, one for each input feature value. In the sequential embodiment, one processor successively processes each of the input feature values.
In a parallel embodiment of the present invention, each of the processing units is operative to receive, during a time trial, an individual input value from an input vector. A plurality of conductors connect each of the processing units to every other processing unit of the system. The conductors transfer weighted values among the processing units according to processes of the present invention. Each of the processing units provides, during the time trial, an imputed output value based upon the weighted values. Also, during the same time trial, each of the processing units is operative to update the connection weights used for computing the weighted values, based on the input values received.
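As a sketch of how such a trial might look, the loop below serializes what the hardware would do concurrently, with each row of the connection weight matrix standing in for one processing unit and the dot products standing in for values carried over the conductors (names and shapes are illustrative assumptions):

    import numpy as np

    def parallel_trial(x, W):
        """Each unit j forms its imputed output as a weighted sum of the
        values broadcast by every other unit; row j of W holds unit j's
        connection weights."""
        n = x.size
        out = np.empty(n)
        for j in range(n):  # conceptually, one processing unit per iteration
            others = np.arange(n) != j
            out[j] = W[j, others] @ x[others]
        return out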
Due to the limited number of outputs that a particular processor may drive, when interconnecting many processing units in parallel for the processing of data,
Primary Examiner: Powell, Mark P.
Assistant Examiner: Booker, Kelvin
Attorney: Mehrman, Michael J.
Law Firm: Gardner Groff Mehrman & Josephic P.C.
Assignee: Netuitive, Inc.