Data processing: artificial intelligence – Neural network – Structure
Reexamination Certificate
1997-11-19
2001-04-10
Hafiz, Tariq R. (Department: 2762)
C706S025000, C382S103000, C382S104000
Reexamination Certificate
active
06216119
ABSTRACT:
TECHNICAL FIELD
This invention relates to neural network information processing systems and, more particularly, relates to a multi-kernel neural network computing architecture configured to learn correlations among feature values as the network monitors and imputes measured input values and also predicts future output values.
BACKGROUND OF THE INVENTION
During the ongoing boom in computer technology, much of the attention has focused on sequential information processing systems, such as those found in computing systems ranging from hand-held personal computers to large mainframe computers. In general, most “flat file” sequential information processing systems can be very effective at performing tasks for which the inputs, outputs, and operations are known in advance. But they are less well suited to performing adaptive tasks in which the inputs, outputs, and operations change over time in response to changing environmental factors, changing physical characteristics, and so forth. In other words, typical “flat file” sequential information processing systems are not well suited to performing tasks that involve learning.
Neural networks are a category of computer techniques that may be used to implement learning systems. In particular, neural network computer architectures have been developed to simulate the information processes that occur in thinking organisms. Neural network techniques are often implemented using dedicated hardware processors, such as parallel-processing logic arrays. Generally described, a neural network is a system of interconnected nodes having inputs and outputs in which the output of a given node is driven by a weighted sum of the node's inputs. A neural network is well suited to monitoring, forecasting, and control applications in which the input and output values correspond to physical parameters that can be measured during a series of time trials. Measuring these values over a series of time trials allows the relationships among the inputs and outputs to be learned through empirical analysis. The learned relationships may then be applied to predict output values from measured input values.
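To make the weighted-sum behavior described above concrete, the following Python sketch computes the output of a single node as an activation of the weighted sum of its inputs. The example is illustrative only; the function name, the choice of a logistic activation, and the sample values are assumptions and are not taken from the patent.

    import numpy as np

    def node_output(inputs, weights, bias=0.0):
        # Weighted sum of the node's inputs plus a bias term.
        weighted_sum = np.dot(inputs, weights) + bias
        # Logistic sigmoid activation, one common (but not the only) choice.
        return 1.0 / (1.0 + np.exp(-weighted_sum))

    # A node with three inputs and three connection weights.
    print(node_output([0.2, 0.5, 0.1], [0.4, -0.3, 0.9]))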
To apply a typical neural network system to a physical application, the neural network is configured with appropriate inputs and outputs for the given application. Once constructed, the network is exposed during a training phase to a series of time trials including measured values for both the inputs and the outputs. Through empirical analysis during the training phase, the network learns the relationships among the measured inputs and outputs. After the network has been trained, it may be used during subsequent time trials in a predicting phase to compute predicted outputs from measured inputs. That is, during the predicting phase the network uses the measured inputs to compute predicted outputs based on the relationships learned during the training phase. In a forecasting application, the network typically receives measurements corresponding to the output values during future time trials. These measured output values are then compared to the predicted output values to measure the performance, or predicting accuracy, of the network.
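The training-then-predicting cycle described above can be illustrated with a deliberately simple model. The following Python sketch fits a linear relationship to measured inputs and outputs collected over past time trials and then applies the learned relationship to new measured inputs. It is a hedged illustration using assumed, synthetic data, not the network architecture of the patent.

    import numpy as np

    rng = np.random.default_rng(0)

    # Training phase: measured inputs (one row per time trial) and measured outputs.
    X = rng.normal(size=(100, 3))
    true_w = np.array([1.5, -2.0, 0.5])                # unknown relationship to be learned
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    # Empirical analysis: fit weights relating the measured inputs to the measured outputs.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Predicting phase: apply the learned relationships to new measured inputs.
    X_new = rng.normal(size=(5, 3))
    y_pred = X_new @ w

    # Performance check: compare predictions against the outputs measured later.
    y_later = X_new @ true_w
    print("mean absolute prediction error:", np.mean(np.abs(y_pred - y_later)))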
The neural network may also be retrained from time to time, resulting in a training-predicting operating cycle. Although this type of conventional neural network can effectively apply learned input-output relationships to perform a predictive analysis, the network requires a distinct training phase before a predictive analysis can be performed. The network is not, in other words, capable of learning relationships during the predicting phase. By the same token, the network is not capable of conducting a predictive analysis during the training phase. This drawback limits the usefulness of conventional neural networks in certain situations.
In particular, the inability of conventional neural networks to learn and predict simultaneously limits the effectiveness of these networks in applications in which the relationships between inputs and outputs should be ascertained as quickly as possible, but it is not known how many time trials will be required to learn the relationships. In this situation, it is difficult to determine how many time trials will be adequate to train the network. Similarly, conventional neural networks are not well adapted to applications in which the relationships between inputs and outputs can change in an unknown or unpredictable way. In this situation, it is difficult to determine when to retrain the neural network.
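For contrast, concurrent learning and prediction can be sketched with a generic online update in which every time trial is used first to predict and then, immediately, to learn. The Python sketch below uses a simple least-mean-squares update; it is offered only as an illustration of the general idea, under assumed names and synthetic data, and is not the multi-kernel method of the present invention.

    import numpy as np

    def concurrent_learn_and_predict(trials, learning_rate=0.05):
        # For each time trial: predict first, then learn from the measured output,
        # so no separate training phase is needed.
        weights = None
        for inputs, measured in trials:
            inputs = np.asarray(inputs, dtype=float)
            if weights is None:
                weights = np.zeros_like(inputs)
            predicted = float(weights @ inputs)                 # predict with current weights
            error = measured - predicted                        # compare to the measurement
            weights = weights + learning_rate * error * inputs  # learn from the same trial
            yield predicted, error

    # A stream of time trials whose input-output relationship is initially unknown.
    rng = np.random.default_rng(1)
    stream = [((x1, x2), 2.0 * x1 - x2) for x1, x2 in rng.normal(size=(200, 2))]
    errors = [abs(e) for _, e in concurrent_learn_and_predict(stream)]
    print("early mean error:", np.mean(errors[:20]))
    print("late mean error:", np.mean(errors[-20:]))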
As a result, conventional neural networks experience important limitations when applied to monitoring, forecasting, and control tasks in which the relationships between the inputs and outputs must be ascertained very quickly and in which the relationships between the inputs and outputs change in an unknown or unpredictable manner. Of course, many monitoring, forecasting, and control tasks fall into these categories. For example, machines such as missile controllers and information packet routers experience rapid changes in the input-output relationships that should be ascertained very quickly. Other monitoring and control operations involving machines that may experience fundamental failures, such as a structural member collapsing or a missile veering out of control, often exhibit input-output relationships that change in an unknown or unpredictable manner.
Conventional neural networks also experience limited effectiveness in applications in which input-output relationships change over time in response to changing factors that are unmeasured and, in many cases, unmeasurable. For example, a commodity price index can be expected to change over time in an unpredictable manner in response to changing factors such as inventory levels, demand for the commodity, the liquidity of the money supply, the psychology of traders, and so forth. Similarly, the relationships between electricity demand and the weather can be expected to change over time in an unpredictable manner in response to changing factors such as demographics, the technology of heating and cooling equipment, economic conditions, and the like.
Another limitation encountered with conventional neural networks stems from the fact that the physical configuration of the network is typically tailored for a particular set of inputs and outputs. Although the network readily learns the relationships among these inputs and outputs, the network is not configured to redefine its inputs and outputs in response to measured performance. This is because the input-output connection weights applied by the network may change when the network is retrained, but the inputs and outputs themselves remain the same. Without an effective input-output refinement process, the network cannot identify and eliminate ineffective or redundant inputs and outputs. As a result, the network cannot adapt to changing conditions or continually improve its predictions for a particular application.
Interestingly, the two shortcomings associated with conventional neural networks described above—the inability to learn and predict simultaneously and the lack of an effective input-output refinement process—are shortcomings that have apparently been overcome in even the most rudimentary thinking organisms. In fact, the ability to predict and learn simultaneously is an important aspect of an awake or cognitive state in a thinking organism. And the ability to allocate increasing amounts of input-output processing capacity in response to repetition of a task is an important aspect of learning in a thinking organism. Practice makes perfect, so to speak. As a result, conventional neural networks that lack these attributes experience important limitations in simulating the intelligent behavior of thinking organisms.
Accordingly, there is a general need in the art for monitoring, forecasting, and control systems that simultaneously learn and predict. There is a further need in the art for monitoring, forecasting, and control techniques that include effective input-output refinement processes.
Gardner & Groff, P.C.
Hafiz Tariq R.
Mehrman Michael J.
Netuitive, Inc.
Starks Wilbert L.