Classification method and apparatus based on boosting and pruning of ART-based neural networks

Data processing: artificial intelligence – Neural network – Learning task

Details

U.S. Classifications: C706S016000, C706S025000, C704S009000
Type: Reexamination Certificate
Status: active
Patent number: 06456991


TECHNICAL FIELD
The present invention relates to neural networks used for pattern recognition. More particularly, the invention disclosed herein provides a highly accurate classification method and apparatus for position and recognition sensing which uses a boosting and pruning approach for adaptive resonance theory (ART) based neural networks. The present invention is chiefly applicable to pattern recognition problems such as automobile occupant type and position recognition and hand signal recognition.
BACKGROUND OF THE INVENTION
In recent years, research has been devoted to the use of artificial neural networks (ANNs) as a nonparametric regression tool for the function approximation of noisy mappings. ANNs have been successfully applied in a wide variety of function approximation applications, including pattern recognition, adaptive signal processing, and the control of highly nonlinear dynamic systems. In pattern recognition applications, ANNs are used to construct pattern classifiers that are capable of separating patterns into distinct classes. In signal processing and control applications, ANNs are used to build a model of a physical system from data in the form of examples that characterize the behavior of the system. In this case, the ANN is essentially used as a tool to extract the mapping between the inputs and outputs of the system without making assumptions about its functional form.
The most common type of ANN used in function approximation problems is the feedforward type. Although these networks have been successfully used in various applications, their performance depends on a problem-specific crafting of the network architecture (e.g., the number of hidden layers and the number of nodes in each hidden layer) and of the network parameters (e.g., the learning rate). These networks operate in a batch-processing (off-line) mode, in which the entire training data set is presented over repeated training epochs while the weights of the network are adjusted until the mean-square error of the network falls below a user-defined level. These weight adjustments (learning) are typically based on some form of gradient descent and are prone to becoming stuck in local minima, so there is no guarantee that the network converges to the desired solution. Further, once such a network has been trained, the only way to accommodate new training data is to retrain the network on the old and new training data combined.
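For illustration, the following is a minimal sketch of the batch training loop just described: a one-hidden-layer feedforward network whose weights are adjusted by full-batch gradient descent until the mean-square error falls below a user-defined level. The layer size, learning rate, and stopping threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def train_feedforward(X, Y, n_hidden=8, lr=0.1, mse_target=1e-3, max_epochs=5000):
    """Full-batch gradient descent for a one-hidden-layer network.

    Sketch of the batch (off-line) training mode described above: every
    epoch presents the entire training set and moves the weights down
    the error gradient until the mean-square error reaches a
    user-defined level. All sizes and rates here are illustrative.
    """
    rng = np.random.default_rng(0)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
    b2 = np.zeros(n_out)

    for epoch in range(max_epochs):
        # Forward pass over the whole training batch.
        H = np.tanh(X @ W1 + b1)            # hidden-layer activations
        P = H @ W2 + b2                     # linear output layer
        err = P - Y
        mse = np.mean(err ** 2)
        if mse < mse_target:                # user-defined stopping level
            break
        # Backward pass: gradients of the mean-square error.
        dP = 2.0 * err / err.size
        dW2 = H.T @ dP
        db2 = dP.sum(axis=0)
        dH = (dP @ W2.T) * (1.0 - H ** 2)   # tanh derivative
        dW1 = X.T @ dH
        db1 = dH.sum(axis=0)
        # Gradient-descent step; may settle in a local minimum.
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return (W1, b1, W2, b2), mse
```

Note that accommodating new data with this scheme means calling the routine again on the combined old and new samples, which is exactly the retraining burden the paragraph above points out.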
Adaptive resonance architectures are neural networks that self-organize stable recognition categories in real time in response to arbitrary sequences of input patterns. The basic principles of adaptive resonance theory (ART) were introduced in Grossberg, "Adaptive pattern classification and universal recoding, II: Feedback, expectation, olfaction, and illusions," Biological Cybernetics, 23 (1976) 187-202. A class of adaptive resonance architectures has since been characterized as a system of ordinary differential equations by Carpenter and Grossberg in "Category learning and adaptive pattern recognition: A neural network model," Proceedings of the Third Army Conference on Applied Mathematics and Computing, ARO Report 86-1 (1985) 37-56, and "A massively parallel architecture for a self-organizing neural pattern recognition machine," Computer Vision, Graphics, and Image Processing, 37 (1987) 54-115. One implementation of an ART system is presented in U.S. application Ser. No. PCT/US86/02553, filed Nov. 26, 1986 by Carpenter and Grossberg for "Pattern Recognition System".
More recently, a novel neural network called the fuzzy ARTMAP, capable of incremental approximation of nonlinear functions, was proposed by Carpenter et al. in G. A. Carpenter, S. Grossberg, N. Markuzon, J. H. Reynolds, and D. B. Rosen, "Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps," IEEE Transactions on Neural Networks, Vol. 3, No. 5, pp. 698-713, 1992. The number of nodes in this network is recruited dynamically and automatically, depending on the complexity of the function. Further, the network guarantees stable convergence and can learn new training data without retraining on previously presented data. While the fuzzy ARTMAP and its variants have performed very well on classification problems, as well as on the extraction of rules from large databases, they do not perform well on function approximation tasks under highly noisy conditions. This problem was addressed by Marriott and Harrison in S. Marriott and R. F. Harrison, "A modified fuzzy ARTMAP architecture for the approximation of noisy mappings," Neural Networks, Vol. 8, pp. 619-641, 1995, who designed a new variant of the fuzzy ARTMAP called PROBART to handle incremental function approximation problems under noisy conditions. The PROBART retains all of the desirable properties of fuzzy ARTMAP but requires fewer nodes to approximate functions.
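The dynamic node recruitment mentioned above can be made concrete with a minimal sketch of the standard fuzzy ART category search that underlies fuzzy ARTMAP's clustering modules. The parameter values and the class name FuzzyART are illustrative assumptions; this is not the patent's own implementation.

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART clustering (standard formulation, for illustration).

    Category nodes are recruited on demand: an input that no existing
    category matches closely enough, as judged by the vigilance test,
    creates a new node, so the network grows with the complexity of
    the data rather than being sized in advance.
    """

    def __init__(self, alpha=0.01, beta=1.0, rho=0.75):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []                           # one weight vector per category

    def _complement_code(self, a):
        # Standard fuzzy ART preprocessing: append 1 - a to the input.
        return np.concatenate([a, 1.0 - a])

    def train(self, a):
        """Present one input a (features scaled to [0, 1]); return its category."""
        x = self._complement_code(np.asarray(a, dtype=float))
        # Rank committed categories by the fuzzy choice function
        # T_j = |x AND w_j| / (alpha + |w_j|), using elementwise min.
        order = sorted(
            range(len(self.w)),
            key=lambda j: -np.minimum(x, self.w[j]).sum()
                          / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(x, self.w[j]).sum() / x.sum()
            if match >= self.rho:             # vigilance test passed: resonance
                self.w[j] = (self.beta * np.minimum(x, self.w[j])
                             + (1.0 - self.beta) * self.w[j])
                return j
        self.w.append(x.copy())               # no match anywhere: recruit a node
        return len(self.w) - 1
```

Presenting inputs one at a time, e.g. FuzzyART(rho=0.8).train([0.2, 0.7]), commits the first node; each later input either resonates with an existing category or recruits a new one, which is the incremental, retraining-free behavior the paragraph above describes.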
Another desirable property of the ANN is its ability to generalize to previously untrained data. While the PROBART network is capable of incremental function approximation under noisy conditions, it does not generalize very well to previously untrained data. The PROBART network has been modified by Srinivasa in N. Srinivasa, “Learning and Generalization of Noisy Mappings Using a Modified PROBART Neural Network,” IEEE Transactions on Signal Processing, Vol. 45, No. 10, October 1997, pp. 2533-2550, to achieve a reliable generalization capability. The modified PROBART (M-PROBART) considerably improved the prediction accuracy of the PROBART network on previously untrained data even for highly noisy function approximation tasks. Furthermore, the M-PROBART allows for a relatively small number of training samples to approximate a given mapping, thereby improving the learning speed.
The modified probability adaptive resonance theory (M-PROBART) neural network algorithm is a variant of the fuzzy ARTMAP and was developed to overcome the deficiency of incrementally approximating nonlinear functions under noisy conditions. The M-PROBART neural network is a variant of the adaptive resonance theory (ART) network concept and consists of two clustering networks connected by an associative learning network. The basic M-PROBART structure is shown in FIG. 1. For any given input-output data pair, the first clustering network 100 clusters the input features, shown in the figure as an input feature space 102 having N features, in the form of hyper-rectangles. The vertices of the hyper-rectangle are defined by the values of the input features, and the dimension of the hyper-rectangle is equal to the number of input features. The size of the hyper-rectangle is defined based on the outlier members of each cluster. The corresponding output, shown in the figure as an output feature space 104 having M features, is also clustered by the second clustering network 106 in the form of a hyper-rectangle. An associative learning network 108 then correlates these clusters. The clustering networks 100 and 106 are represented by a series of nodes 110. In the original fuzzy ARTMAP network, only many-to-one functional mappings were allowed. This implies that many hyper-rectangles that form input clusters could be associated with a single hyper-rectangle on the output side, but not the other way around. Further, for any given input, only one cluster (i.e., the maximally active or best-match cluster) was allowed to be active on the input side, and a prediction was based on the associated output cluster for that maximally active input cluster. This mode of prediction is called the winner-take-all (WTA) mode of prediction. It has been shown that by replacing the WTA mode of prediction with a distributed mode of prediction, combined with allowing one-to-many mappings between the input and output clusters, the M-PROBART achieves better prediction accuracy than the fuzzy ARTMAP under noisy conditions.
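To make the geometry concrete, the sketch below decodes a complement-coded fuzzy ART weight vector into its hyper-rectangle corners and contrasts WTA prediction with a simple distributed prediction. The activation and blending rules shown are common illustrative simplifications, not the patent's own equations, and the names hyperbox, predict, and assoc are hypothetical.

```python
import numpy as np

def hyperbox(w, d):
    """Decode a complement-coded fuzzy ART weight into hyper-rectangle corners.

    With complement coding, w = (u, v_complement): w[:d] is the lower
    corner and 1 - w[d:] the upper corner of a box whose dimension
    equals the number d of input features.
    """
    return w[:d], 1.0 - w[d:]

def predict(a, weights, assoc, alpha=0.01, winner_take_all=True):
    """Predict an output-cluster distribution for input a.

    assoc[j, k] is the associative-network strength linking input
    cluster j to output cluster k. WTA mode consults only the
    maximally active input cluster; distributed mode blends all
    clusters in proportion to their activations (an illustrative
    weighting, not the patent's exact rule).
    """
    x = np.concatenate([a, 1.0 - a])          # complement-coded input
    T = np.array([np.minimum(x, w).sum() / (alpha + w.sum())
                  for w in weights])          # input-cluster activations
    if winner_take_all:
        return assoc[np.argmax(T)]            # best-match cluster predicts alone
    p = T / T.sum()
    return p @ assoc                          # distributed prediction
```

Under noise, the distributed mode pools evidence from every partially matching hyper-rectangle instead of betting on a single winner, which is the intuition behind the accuracy gain described above.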
The associative learning network in the M-PROBART has the simple function of counting the frequency of co-occurrence of an input cluster and an output cluster. Thus, if an input cluster is very frequently co-active with an output cluster…
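Although the source text is cut off above, the counting mechanism it describes can be sketched as follows; normalizing the counts into an empirical conditional distribution is an assumed prediction rule, not one stated in the truncated passage.

```python
import numpy as np

class AssociativeCounter:
    """Frequency-counting associative network (illustrative sketch).

    Each training pair increments the count linking the active input
    cluster to the active output cluster; normalizing a row then gives
    an empirical conditional distribution over output clusters, which
    could serve as the assoc matrix in the prediction sketch above.
    """

    def __init__(self, n_in_clusters, n_out_clusters):
        self.counts = np.zeros((n_in_clusters, n_out_clusters))

    def observe(self, j_in, k_out):
        self.counts[j_in, k_out] += 1.0       # record one co-activation

    def conditional(self, j_in):
        row = self.counts[j_in]
        total = row.sum()
        return row / total if total else row  # empirical P(output | input)
```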
