Data processing: artificial intelligence – Neural network – Learning task
Reexamination Certificate
2011-04-12
Vincent, David R (Department: 2129)
active
07925602
ABSTRACT:
Described is a technology by which a maximum entropy model used for classification is trained with a significantly smaller amount of training data than is normally needed to train such models, yet provides comparable accuracy. The maximum entropy model is initially parameterized with values derived from weights obtained by training a vector space model or an n-gram model; the weights may be scaled into the initial parameter values by determining a scaling factor. Gaussian mean values may also be determined and used for regularization when training the maximum entropy model, and scaling may be applied to those mean values as well. After initial parameterization, training uses the training data to iteratively adjust the initial parameters until convergence is reached.
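The sketch below illustrates the training scheme the abstract describes, assuming the maximum entropy model takes the usual multinomial logistic-regression form, that plain gradient descent stands in for whatever optimizer the patent actually uses, and that the Gaussian regularization means coincide with the scaled source-model weights. All names here (train_maxent, prior_weights, scale_factor, sigma2) are illustrative, not taken from the patent.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_maxent(X, y, prior_weights, scale_factor=1.0,
                 sigma2=1.0, lr=0.1, max_iter=500, tol=1e-6):
    """Train a maximum entropy (multinomial logistic) classifier whose
    parameters start from scaled weights of another model (e.g. a TF*IDF
    vector space model), with a Gaussian prior centered on those scaled
    values acting as the regularizer.

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) integer class labels
    prior_weights : (n_features, n_classes) weights from the source model
    """
    n, _ = X.shape
    k = prior_weights.shape[1]

    # Initial parameterization: scale the source-model weights.
    theta = scale_factor * prior_weights
    # Gaussian means for regularization, scaled the same way (an assumption).
    mu = scale_factor * prior_weights

    Y = np.eye(k)[y]                      # one-hot targets
    prev_loss = np.inf
    for _ in range(max_iter):
        P = softmax(X @ theta)            # class posteriors
        # Negative log-likelihood plus Gaussian prior centered at mu.
        nll = -np.log(P[np.arange(n), y] + 1e-12).sum()
        loss = nll + ((theta - mu) ** 2).sum() / (2 * sigma2)
        # Gradient of the regularized objective, then a descent step.
        grad = X.T @ (P - Y) + (theta - mu) / sigma2
        theta -= lr * grad / n
        if abs(prev_loss - loss) < tol:   # iterate until convergence
            break
        prev_loss = loss
    return theta
```

In this reading, scale_factor and sigma2 would be tuned on held-out data; the key point is that both the starting parameters and the regularization means come from the previously trained model rather than from zero.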
REFERENCES:
patent: 5640487 (1997-06-01), Lau et al.
patent: 6415248 (2002-07-01), Bangalore et al.
patent: 7107207 (2006-09-01), Goodman
patent: 7107266 (2006-09-01), Breyman et al.
patent: 7769759 (2010-08-01), Gartung et al.
patent: 2001/0003174 (2001-06-01), Peters
patent: 2003/0125942 (2003-07-01), Peters
patent: 2005/0149462 (2005-07-01), Lee et al.
patent: 2005/0228778 (2005-10-01), Perrone
patent: 2006/0020448 (2006-01-01), Chelba et al.
patent: 2006/0136589 (2006-06-01), Konig et al.
patent: 2006/0212288 (2006-09-01), Sethy et al.
patent: 2006/0277173 (2006-12-01), Li et al.
patent: 2007/0022069 (2007-01-01), Goodman
patent: 2007/0100624 (2007-05-01), Weng et al.
patent: 2008/0097936 (2008-04-01), Schmidtler et al.
patent: 2008/0183649 (2008-07-01), Farahani et al.
Khudanpur et al., “A Maximum Entropy Language Model Integrating N-Grams and Topic Dependencies for Conversational Speech Recognition”, IEEE, 1999, pp. 553-556.
Ye-Yi Wang et al., “Maximum Entropy Model Parameterization With TF*IDF Weighted Vector Space Model”, In Proceedings of the IEEE Workshop on Automatic Speech Recognition & Understanding, Kyoto, Japan, Dec. 9-13, 2007, pp. 213-218.
Dawn J. Lawrie, “Language Models for Hierarchical Summarization”, Sep. 2003.
Kruengkrai et al., “Document Clustering using Linear Partitioning Hyperplanes and Reallocation”, National Institute of Information and Communications Technology, Thailand, 2004.
Roark et al., “Discriminative n-gram Language Modeling”, Oregon Health & Science University, Beaverton, USA.
Xu et al., “Training Connectionist Models for the Structured Language Model”, Johns Hopkins University, Baltimore, MD.
Acero Alejandro
Wang Ye-Yi
Microsoft Corporation
Vincent David R
Maximum entropy model classifier that uses Gaussian mean values