Data reduction system for improving classifier performance
Classification: Data processing: artificial intelligence – Neural network – Learning task
Type: Reexamination Certificate
Filed: 1999-03-18
Issued: 2002-05-28
Examiner: Black, Thomas (Department: 2121)
US Classification: C702S181000
Status: active
Patent Number: 06397200
ABSTRACT:
STATEMENT OF GOVERNMENT INTEREST
The invention described herein may be manufactured and used by or for the Government of the United States of America for governmental purposes without the payment of any royalties thereon or therefor.
BACKGROUND OF THE INVENTION
(1) Field of the Invention
The invention relates to a data reduction system that reduces the dimensionality of neural network training data by finding features that most improve performance of the neural network.
(2) Description of the Prior Art
The use of classification systems to classify input data into one of several predetermined classes is well known. Their use has been adapted to a wide range of applications, including target identification, medical diagnosis, speech recognition, digital communications, and quality control systems.
Classification of sonar signals into threats and non-threats is an important task for sonar operators. Neural networks have been proposed to help accomplish this task by receiving a signal from the sonar system and analyzing its characteristics to determine whether the signal originates from a military vessel that represents a threat or from a commercial vessel. Speed in making this determination is often of the essence.
Classification systems decide, given an input X, to which of several output classes X belongs. If known, measurable characteristics separate classes, the classification decision is straightforward. However, for most applications, such characteristics are unknown, and the classification system must decide which output class the input most closely resembles. In such applications, the output classes and their characteristics are modeled (estimated) using statistics for the classes derived from training data belonging to known classes. Thus, the standard classification approach is to first estimate the statistics from the given training data and then to apply a decision rule using these estimated statistics.
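By way of illustration only, the following Python sketch shows this standard two-step approach for discrete (quantized) features: per-class symbol probabilities are estimated from labeled training data, and a new input is then assigned to the class it most closely resembles under a maximum-likelihood decision rule. The function names, the add-one smoothing, and the small example data are assumptions of the sketch and are not taken from the patent.

```python
import numpy as np

# Minimal sketch (not from the patent) of the standard two-step approach:
# estimate per-class symbol probabilities from training data, then classify a
# new input by maximum likelihood. Discrete, quantized features are assumed.

def estimate_class_statistics(train_X, train_y, n_levels):
    """Estimate P(level | class, feature) from labeled training data."""
    stats = {}
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        # Relative frequency of each quantization level, per feature, with
        # add-one smoothing so unseen levels keep a small nonzero probability.
        counts = np.stack([np.bincount(Xc[:, f], minlength=n_levels)
                           for f in range(train_X.shape[1])])
        stats[c] = (counts + 1) / (counts.sum(axis=1, keepdims=True) + n_levels)
    return stats

def classify(x, stats):
    """Assign x to the class whose estimated statistics it most resembles."""
    log_likelihood = {c: np.log(p[np.arange(len(x)), x]).sum()
                      for c, p in stats.items()}
    return max(log_likelihood, key=log_likelihood.get)

# Example: two classes, three binary (two-level) features.
train_X = np.array([[0, 1, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1]])
train_y = np.array([0, 0, 1, 1])
stats = estimate_class_statistics(train_X, train_y, n_levels=2)
print(classify(np.array([0, 1, 1]), stats))  # prints 0
```

The smoothing step is only one simple way of keeping estimated symbol probabilities away from zero; as the next paragraph notes, estimates of this kind degrade when training data is scarce or when the symbol probabilities drift after training.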
However, there is often insufficient training data to accurately infer the true statistics for the output classes, which results in reduced classification performance, that is, more frequent classification errors. Additionally, any new information that arrives with the input data is not combined with the training data to improve the estimates of the symbol probabilities. Furthermore, changes in the symbol probabilities, caused by possibly unobservable changes in the source of the test data, the sensors gathering the data, or the environment, often result in reduced classification performance. Thus, if a classification system maintains, based on the training data, a near-zero probability for the occurrence of a symbol, and that symbol begins to occur in the input data with increasing frequency, classification errors are likely to occur if the new data is not used in determining the symbol probabilities.
Attempts to improve the classification performance and take advantage of information available in test data have explored combining the test data with the training data in modeling class statistics and making classification decisions. While these attempts have indicated that improved classification performance is possible, they have one or more drawbacks which limit or prevent their use for many classification systems.
The use of Bayesian classification for combining training data with test data is taught in the prior art in Merhav et al., “A Bayesian Classification Approach with Application to Speech Recognition,” IEEE Trans. Signal Processing, vol. 39, no. 10 (1991), pp. 2157-2166. Merhav et al. explored classification decision rules that depend on the available training and test data. A first decision rule, a Bayesian rule, was identified. However, this classification rule was not fully developed or evaluated because implementing and evaluating the required probability density functions is extremely complex.
It is known in prior art artificial intelligence systems to reduce data complexity by grouping data into worlds with shared similar attributes. This grouping helps separate relevant data from redundant data using a co-exclusion technique, in which the saved data are searched for events that do not happen at the same time. This results in a memory saving for the system because only the occurrence of one such event must be recorded; the co-exclusive event can be assumed.
Bayesian networks, also known as belief networks, are known in the art for use as filtering systems. The belief network is initially learned by the system from data provided by an expert, user data, and user preference data. The belief network is relearned when additional attributes are identified as having an effect, and it can then be accessed to predict that effect.
A method for removing redundant features from training data is needed to reduce the training time required for a neural network and to provide a system that does not require long training times or a randomized starting configuration.
SUMMARY OF THE INVENTION
Accordingly, it is a general purpose and primary object of the present invention to provide a classification system capable of classifying data into multiple classes.
Another object of the invention is that such a classification system should not include redundant and ineffectual data.
A further object of the invention is to provide a method for reducing feature vectors to only those values which affect the outcome of the classification.
Accordingly, this invention provides a data reduction method for a classification system that uses quantized feature vectors for each class, each with a plurality of features and levels. The reduction method consists of applying a Bayesian data reduction algorithm to the classification system to develop reduced feature vectors. Test data is then quantized into the reduced feature vectors, and the reduced classification system is tested using the quantized test data.
A Bayesian data reduction algorithm is further provided that begins by computing an initial probability of error for the classification system. Adjacent levels are merged for each feature in the quantized feature vectors, and level-based probabilities of error are calculated for these merged levels among the plurality of features. The system then selects and applies the merged adjacent levels having the minimum level-based probability of error to create an intermediate classification system. The steps of merging, selecting, and applying are repeated until either the probability of error stops improving or the features and levels are incapable of further reduction.
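By way of illustration only, and not as the patented implementation, the Python sketch below shows the control flow of such a greedy level-merging loop: compute an initial probability of error, evaluate every merge of adjacent levels within each feature, apply the merge with the minimum level-based probability of error, and repeat until the error stops improving or nothing remains to merge. The probability-of-error computation itself is abstracted behind a caller-supplied probability_of_error(train_X, train_y, level_map) function, and the names merge_adjacent, bayesian_data_reduction, and level_map are assumptions of the sketch.

```python
import numpy as np

# Sketch of the greedy level-merging loop described above; not the patent's
# implementation. level_map[f] maps each original quantization level of
# feature f to its current merged level, and the probability-of-error
# computation is supplied by the caller.

def merge_adjacent(level_map, feature, level):
    """Return a copy of level_map with merged levels `level` and `level + 1`
    of `feature` combined into a single level."""
    new_map = [m.copy() for m in level_map]
    new_map[feature][new_map[feature] > level] -= 1
    return new_map

def bayesian_data_reduction(train_X, train_y, n_levels, probability_of_error):
    n_features = train_X.shape[1]
    level_map = [np.arange(n_levels) for _ in range(n_features)]
    best_error = probability_of_error(train_X, train_y, level_map)
    while True:
        # Evaluate every merge of two adjacent levels of every feature.
        candidates = []
        for f in range(n_features):
            for q in range(int(level_map[f].max())):
                merged = merge_adjacent(level_map, f, q)
                candidates.append(
                    (probability_of_error(train_X, train_y, merged), merged))
        if not candidates:            # every feature is down to a single level
            break
        error, merged = min(candidates, key=lambda c: c[0])
        if error > best_error:        # probability of error stopped improving
            break
        best_error, level_map = error, merged
    return level_map, best_error
```

The returned level_map can then be used to re-quantize both the training data and the test data into the reduced feature vectors (a feature whose levels all merge into a single level carries no information and is effectively removed), after which the reduced classification system is tested on the quantized test data as described above.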
REFERENCES:
patent: 4959870 (1990-09-01), Tachikawa
patent: 5479576 (1995-12-01), Watanabe et al.
patent: 5572597 (1996-11-01), Chang et al.
patent: 5633948 (1997-05-01), Kegelmeyer, Jr.
patent: 5701398 (1997-12-01), Glier et al.
patent: 5790758 (1998-08-01), Streit
patent: 5796924 (1998-08-01), Errico et al.
patent: 5884294 (1999-03-01), Kadar et al.
patent: 5999893 (1999-12-01), Lynch, Jr. et al.
patent: 6009418 (1999-12-01), Cooper
patent: 6027217 (2000-02-01), McClure et al.
patent: 6035057 (2000-03-01), Hoffman
patent: 6278799 (2001-08-01), Hoffman
Lynch Jr., et al.; “Classification using dirichlet priors when the training data are mislabeled”. 1999 IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 1999, vol. 5, pp. 2973-2976.*
Lynch Jr., et al.; “Testing the statistical similarity of discrete observations using Dirichlet priors”. 1998 IEEE International Symposium on Information Theory, Aug. 1998, p. 144.*
Lynch Jr., et al.; “Bayesian classification and the reduction of irrelevant features from training data”. Proceedings of the 37th IEEE Conference on Decision and Control, Dec. 1998, vol. 2, pp. 1591-1592.*
Morris et al.; “Some solution to the missing feature problem in data classification, with application to noise robust ASR”. Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, May 1998, vol. 2, pp. 737-740.*
Huang et al.; “An automatic hierarchical image classification scheme”. Proceedings
Inventors: Lynch, Jr., Robert S.; Willett, Peter K.
Primary Examiner: Black, Thomas
Assistant Examiner: Booker, Kelvin
Attorneys: Kasischke, James M.; Lall, Prithvi C.; McGowan, Michael J.