Data processing: artificial intelligence – Adaptive system
Reexamination Certificate
2000-11-14
2004-04-27
Khatri, Anil (Department: 2121)
Data processing: artificial intelligence
Adaptive system
C706S012000, C706S020000, C706S046000, C706S047000
Reexamination Certificate
active
06728689
ABSTRACT:
FIELD OF THE INVENTION
The present invention relates generally to the fields of data mining or machine learning and, more particularly, to methods and apparatus for generating data classification models.
BACKGROUND OF THE INVENTION
Data classification techniques, often referred to as supervised learning, attempt to find an approximation or hypothesis to a target concept that assigns objects (such as processes or events) to different categories or classes. Data classification can normally be divided into two phases, namely, a learning phase and a testing phase. The learning phase applies a learning algorithm to training data. The training data typically comprises descriptions of objects (a set of feature variables) together with the correct classification for each object (the class variable).
The goal of the learning phase is to find correlations in the object descriptions that indicate how the objects should be classified. The training data is used to construct models with which the class variable can be predicted for a record in which the feature variables are known but the class variable is unknown. Thus, the end result of the learning phase is a model or hypothesis (e.g., a set of rules) that can be used to predict the class of new objects. The testing phase uses the model derived in the learning phase to predict the class of testing objects. The classifications made by the model are compared to the true object classes to estimate the accuracy of the model.
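The two phases can be illustrated with a short sketch. This example is not part of the patent; the dataset, the class labels, and the use of scikit-learn's decision-tree learner are assumptions made purely for illustration.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical training data: each row describes one object (its feature
# variables); y holds the correct classification of each object (the class variable).
X = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [5.9, 3.2], [6.7, 3.1], [5.0, 3.4]]
y = ["class_a", "class_a", "class_b", "class_b", "class_b", "class_a"]

# Learning phase: apply a learning algorithm to the training data to obtain a model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = DecisionTreeClassifier().fit(X_train, y_train)

# Testing phase: predict the class of the testing objects and compare the
# predictions with the true classes to estimate the model's accuracy.
predictions = model.predict(X_test)
print("estimated accuracy:", accuracy_score(y_test, predictions))
```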
Numerous techniques are known for deriving the relationship between the feature variables and the class variables, including, for example, Disjunctive Normal Form (DNF) Rules, decision trees, nearest neighbor, support vector machines (SVMs) and Bayesian classifiers, as described, for example, in R. Agrawal et al., “An Interval Classifier for Database Mining Applications,” Proc. of the 18th VLDB Conference, Vancouver, British Columbia, Canada 1992; C. Apte et al., “RAMP: Rules Abstraction for Modeling and Prediction,” IBM Research Report RC 20271, June 1995; J. R. Quinlan, “Induction of Decision Trees,” Machine Learning, Volume 1, Number 1, 1986; J. Shafer et al., “SPRINT: A Scalable Parallel Classifier for Data Mining,” Proc. of the 22nd VLDB Conference, Bombay, India, 1996; M. Mehta et al., “SLIQ: A Fast Scalable Classifier for Data Mining,” Proceedings of the Fifth International Conference on Extending Database Technology, Avignon, France, March, 1996, each incorporated by reference herein.
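Several of the model families named above are available in standard libraries. The sketch below is again illustrative only, with hypothetical data; it fits a decision tree, a nearest-neighbor classifier, an SVM, and a Bayesian classifier to the same training set to show that each technique produces its own hypothesis for the same data.

```python
from sklearn.tree import DecisionTreeClassifier      # decision trees
from sklearn.neighbors import KNeighborsClassifier   # nearest neighbor
from sklearn.svm import SVC                          # support vector machines
from sklearn.naive_bayes import GaussianNB           # Bayesian classifier

# Hypothetical feature vectors and class labels.
X = [[0.2, 1.1], [0.4, 0.9], [1.8, 2.2], [2.1, 1.9], [0.3, 1.0], [2.0, 2.1]]
y = [0, 0, 1, 1, 0, 1]

learners = {
    "decision tree": DecisionTreeClassifier(),
    "nearest neighbor": KNeighborsClassifier(n_neighbors=3),
    "SVM": SVC(),
    "naive Bayes": GaussianNB(),
}

# Each learner encodes different assumptions (a different bias), so the models
# they output for the same training data generally differ.
for name, learner in learners.items():
    learner.fit(X, y)
    print(name, learner.predict([[0.5, 1.0]]))
```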
Data classifiers have a number of applications that automate the labeling of unknown objects. For example, astronomers are interested in automated ways to classify objects within the millions of existing images mapping the universe (e.g., to differentiate stars from galaxies). Learning algorithms have been trained to recognize these objects during the learning phase and then used to classify new objects in astronomical images. This automated classification process obviates the manual labeling of the thousands of currently available astronomical images.
While such learning algorithms derive the relationship between the feature variables and the class variables, they generally produce the same output model given the same domain dataset. Generally, a learning algorithm encodes certain assumptions about the nature of the concept to be learned, referred to as the bias of the learning algorithm. If the assumptions are wrong, however, then the learning algorithm will not provide a good approximation of the target concept and the output model will exhibit low accuracy. Most research in the area of data classification has focused on producing increasingly accurate models, a goal that cannot be attained universally over all possible domains. It is now well understood that increasing the quality of the output model on a certain group of domains will cause a decrease in quality on other groups of domains. See, for example, C. Schaffer, “A Conservation Law for Generalization Performance,” Proc. of the Eleventh Int'l Conference on Machine Learning, 259-65, San Francisco, Morgan Kaufmann (1994); and D. Wolpert, “The Lack of a Priori Distinctions Between Learning Algorithms and the Existence of a Priori Distinctions Between Learning Algorithms,” Neural Computation, 8 (1996), each incorporated by reference herein.
While conventional learning algorithms produce sufficiently accurate models for many applications, they suffer from a number of limitations, which, if overcome, could greatly improve the performance of the data classification and regression systems that employ such models. Specifically, the learning algorithms of conventional data classification and regression systems are unable to adapt over time. In other words, once a model is generated by a learning algorithm, the model cannot be reconfigured based on experience. Thus, the conventional data classification and regression systems that employ such models are prone to repeating the same errors.
Our contemporaneously filed patent application discloses a data classification system that adapts a learning algorithm through experience. The disclosed data classification system employs a meta-learning algorithm to dynamically modify the assumptions of the learning algorithm embodied in the generated models. The meta-learning algorithm utilized by the data classification system, however, has a fixed bias. Since modifying the assumptions of the learning algorithm inevitably requires further assumptions at the meta-level, it appears that an infinite chain of modifications is necessary to produce adaptive learning algorithms. A need therefore exists for a method and apparatus for adapting both the learning algorithm and the meta-learning algorithm through experience.
SUMMARY OF THE INVENTION
Generally, a data classification method and apparatus are disclosed for labeling unknown objects. The disclosed data classification system employs a learning algorithm that adapts through experience. The present invention classifies objects in domain datasets using data classification models having a corresponding bias and evaluates the performance of the data classification. The performance values for each domain dataset and corresponding model bias are processed to initially identify (and over time modify) one or more rules of experience. The rules of experience are subsequently used to generate a model for data classification. Each rule of experience specifies one or more characteristics for a domain dataset and a corresponding bias that should be utilized for a data classification model if the rule is satisfied.
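The patent text above does not include pseudocode; the following sketch is one hypothetical way the rules of experience could be represented and consulted. The record fields, characteristic names, and matching logic are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class RuleOfExperience:
    """One rule: dataset characteristics -> bias to use, plus observed performance."""
    characteristics: dict   # e.g. {"num_features": "high"} (hypothetical keys)
    bias: str               # e.g. "axis-parallel splits" (hypothetical bias label)
    performance: float      # accuracy observed when this bias was last used

def choose_bias(rules, dataset_characteristics, default_bias):
    """Select the bias of the best-performing rule whose characteristics are
    satisfied by the new domain dataset; otherwise fall back to a default."""
    matching = [r for r in rules
                if all(dataset_characteristics.get(k) == v
                       for k, v in r.characteristics.items())]
    if not matching:
        return default_bias
    return max(matching, key=lambda r: r.performance).bias

def record_experience(rules, dataset_characteristics, bias, performance):
    """After a dataset has been classified and evaluated, store the result so
    that the rules of experience are modified over time."""
    rules.append(RuleOfExperience(dataset_characteristics, bias, performance))
```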
Thus, the present invention dynamically modifies the assumptions (bias) of the learning algorithm to improve the assumptions embodied in the generated models and thereby improve the quality of the data classification and regression systems that employ such models. Furthermore, since the rules of experience change dynamically, the learning process of the present invention will not necessarily output the same model when the same domain dataset is presented again. In addition, the disclosed self-adaptive learning process will become increasingly accurate as the rules of experience are accumulated over time.
According to another aspect of the invention, a fixed or dynamic bias can be employed in the meta-learning algorithm. Generally, a dynamic bias may be employed in the meta-learning algorithm, without introducing an infinite chain, by utilizing two self-adaptive learning algorithms, where each of the two self-adaptive learning algorithms has two functions. In a first function, each self-adaptive learning algorithm generates models used for data classification. In a second function, each self-adaptive learning algorithm serves as an adaptive meta-learner for the other adaptive learning algorithm.
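One way to picture that arrangement is sketched below. This is a structural sketch only; the class, the method names, and the crude adaptation rule are hypothetical and not taken from the patent.

```python
class SelfAdaptiveLearner:
    """Plays two roles: it generates classification models, and it acts as the
    adaptive meta-learner for its peer, revising the peer's bias."""

    def __init__(self, name, bias):
        self.name = name
        self.bias = bias
        self.peer = None   # the other self-adaptive learning algorithm

    def learn(self, dataset):
        # First function: generate a model for data classification
        # (placeholder standing in for an actual learning algorithm).
        return {"bias": self.bias, "dataset": dataset}

    def adapt_peer(self, peer_performance):
        # Second function: serve as the adaptive meta-learner for the peer,
        # revising its bias when its models perform poorly (threshold is hypothetical).
        if peer_performance < 0.5:
            self.peer.bias = "revised:" + self.peer.bias

# Each learner meta-learns for the other, so the chain of meta-levels closes
# on itself instead of requiring an infinite sequence of meta-learners.
a = SelfAdaptiveLearner("A", bias="decision-tree")
b = SelfAdaptiveLearner("B", bias="rule-based")
a.peer, b.peer = b, a

model = a.learn(dataset="domain-1")
b.adapt_peer(peer_performance=0.42)   # B adapts A's bias
print(a.bias)                         # -> revised:decision-tree
```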
REFERENCES:
patent: 5701400 (1997-12-01), Amado
patent: 5787234 (1998-07-01), Molloy
patent: 5870731 (1999-02-01), Trif et al.
patent: 6169981 (2001-01-01), Werbos
patent: 6249781 (2001-06-01), Sutton
patent: 6581048 (2003-06-01), Werbos
Breiman, “Bagging Predictors,” Machine Learning, 24:123-140 (1996).*
Gama et
Drissi Youssef
Vilalta Ricardo
Holmes Michael B.
Khatri Anil
Percello Louis J.
Ryan & Mason & Lewis, LLP