Data processing: artificial intelligence – Neural network – Learning task
Reexamination Certificate
2011-03-01
Sparks, Donald (Department: 2129)
Data processing: artificial intelligence
Neural network
Learning task
706/25
Reexamination Certificate
active
07899766
ABSTRACT:
Given a set of training examples, with known inputs and outputs, and a set of working examples, with known inputs but unknown outputs, train a classifier on the training examples. For each possible assignment of outputs to the working examples, determine whether the resulting training and working sets are likely to have been drawn from the same distribution. If so, add the assignment to a set of likely assignments. For each assignment in the likely set, compute the error of the trained classifier on that assignment. Use the maximum of these errors as a probably approximately correct (PAC) error bound for the classifier.
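The abstract describes the bound procedurally; the following is a minimal Python sketch of that procedure. The function name pac_error_bound, the scikit-learn-style fit/predict interface, and the likely predicate are illustrative assumptions: the patent does not specify a particular classifier API or a particular same-distribution test, so the default predicate here simply accepts every assignment.

from itertools import product

def pac_error_bound(classifier, train_x, train_y, work_x,
                    labels=(0, 1), likely=None):
    """Sketch of the worst-likely-assignment error bound.

    `classifier` is any object with fit/predict methods.
    `likely` decides whether the training set plus a labeled working
    set plausibly come from one distribution; with the default
    (accept everything), the bound degenerates to the worst case
    over all possible assignments.
    """
    if likely is None:
        likely = lambda tx, ty, wx, wy: True  # placeholder test

    # Train on the examples with known outputs, then fix the
    # classifier's predictions on the working inputs.
    classifier.fit(train_x, train_y)
    preds = classifier.predict(work_x)

    # Enumerate every possible assignment of outputs to the working
    # inputs, keep only the "likely" ones, and track the worst error
    # the trained classifier makes on any of them.
    worst = 0.0
    for assignment in product(labels, repeat=len(work_x)):
        if not likely(train_x, train_y, work_x, assignment):
            continue
        errors = sum(p != a for p, a in zip(preds, assignment))
        worst = max(worst, errors / len(work_x))
    return worst

Note that the enumeration visits len(labels) ** len(work_x) assignments, so this sketch is only practical for small working sets; the tightness of the resulting bound depends entirely on how selective the same-distribution test is.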
REFERENCES:
Holden, S. B. (1996). PAC-like upper bounds for the sample complexity of leave-one-out cross-validation. Proceedings of the Ninth Annual Conference on Computational Learning Theory (COLT), Jun. 28-Jul. 1, 1996, pp. 41-50.
Audibert, J.-Y. (2004). PAC-Bayesian statistical learning theory. PhD thesis, Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6 and Paris 7. http://cermis.enpc.fr/~audibert/ThesePack.zip.
Bax, E. (1999). Partition-based and sharp uniform error bounds. IEEE Transactions on Neural Networks, vol. 10, no. 6, 1315-1320. USA.
Blum, A. and Langford, J. (2003). PAC-MDL bounds. Proceedings of the 16th Annual Conference on Computational Learning Theory (COLT), 344-357. USA.
Catoni, O. (2003). A PAC-Bayesian approach to adaptive classification. Preprint n. 840, Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6 and Paris 7. France.
Catoni, O. (2004). Improved Vapnik-Chervonenkis bounds. Preprint n. 942, Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6 and Paris 7. France.
Cristianini, N. and Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press. USA.
Derbeko, P., El-Yaniv, R., and Meir, R. (2003). Error bounds for transductive learning via compression and clustering. Advances in Neural Information Processing Systems (NIPS) 16. MIT Press, Cambridge, MA.
Devroye, L., Gyorfi, L., and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York.
El-Yaniv, R. and Gerzon, L. (2005). Effective transductive learning via objective model selection. Pattern Recognition Letters, 26(13):2104-2115. USA.
Joachims, T. (2002). Learning to Classify Text Using Support Vector Machines. Kluwer Academic Publishers. USA.
Littlestone, N. and Warmuth, M. (1986). Relating data compression and learnability. Unpublished manuscript, University of California, Santa Cruz.
Vapnik, V. and Chervonenkis, A. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16, 264-280. USA.
Vapnik, V. (1998). Statistical Learning Theory. John Wiley & Sons. USA.
Bax, Eric Theodore
Callejas, Augusto Daniel
Olude-Afolabi, Ola
Sparks, Donald