Method for building a natural language understanding model...

Data processing: speech signal processing, linguistics, language – Speech signal processing – Recognition

Reexamination Certificate


Details

C704S009000, C704S270000, C704S246000, C704S254000, C709S228000

Reexamination Certificate

active

07620550

ABSTRACT:
A method of generating a natural language understanding (NLU) model for use in a spoken dialog system is disclosed. The method comprises using sample utterances and creating a number of hand-crafted rules for each call type defined in a labeling guide. A first NLU model is generated and tested using the hand-crafted rules and sample utterances. A second NLU model is built using the sample utterances as new training data together with the hand-crafted rules, and is tested for performance using a first batch of labeled data. A series of NLU models is then built by adding each previous batch of labeled data to the training data and using each new batch of labeled data as test data, so that the series is generated with steadily increasing training data. If not all of the labeled data has been received, the step of building the series of NLU models is repeated until all labeled data is received. After all of the labeled data is received, a third NLU model is built using all of the labeled data, wherein the third NLU model is used in generating the spoken dialog service.
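The iterative procedure in the abstract can be sketched as a simple training loop. This is a minimal illustration only: the function names (`build_model`, `evaluate`), the stand-in model representation, and the batch structure are hypothetical and not the patent's actual implementation.

```python
def build_model(rules, training_data):
    """Stand-in for NLU model training: records the rules used and
    the size of the training set."""
    return {"rules": rules, "n_train": len(training_data)}

def evaluate(model, test_data):
    """Stand-in for batch testing: reports test- and train-set sizes."""
    return {"n_test": len(test_data), "n_train": model["n_train"]}

def build_nlu_models(hand_crafted_rules, sample_utterances, label_batches):
    # First model: generated and tested with hand-crafted rules
    # and sample utterances.
    model = build_model(hand_crafted_rules, [])
    evaluate(model, sample_utterances)

    # Second model: sample utterances become the initial training data.
    training_data = list(sample_utterances)
    model = build_model(hand_crafted_rules, training_data)

    # Series of models: each new labeled batch is first used as test
    # data, then folded into the steadily growing training set.
    for batch in label_batches:
        evaluate(model, batch)
        training_data.extend(batch)
        model = build_model(hand_crafted_rules, training_data)

    # Final model: trained on all labeled data, for use in the
    # deployed spoken dialog service.
    return model
```

The key design point the abstract emphasizes is that each labeled batch serves as held-out test data exactly once, before being added to the training set, so evaluation is always on unseen data while training data increases with every batch.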

REFERENCES:
patent: 5685000 (1997-11-01), Cox, Jr.
patent: 6219643 (2001-04-01), Cohen et al.
patent: 6324513 (2001-11-01), Nagai et al.
patent: 6356869 (2002-03-01), Chapados et al.
patent: 6397179 (2002-05-01), Crespo et al.
patent: 6434524 (2002-08-01), Weber
patent: 6510411 (2003-01-01), Norton et al.
patent: 6529871 (2003-03-01), Kanevsky et al.
patent: 6735560 (2004-05-01), Epstein
patent: 6754626 (2004-06-01), Epstein
patent: 6785651 (2004-08-01), Wang
patent: 6950793 (2005-09-01), Ross et al.
patent: 7003463 (2006-02-01), Maes et al.
patent: 7143035 (2006-11-01), Dharanipragada et al.
patent: 2002/0184373 (2002-12-01), Maes
patent: 2003/0216905 (2003-11-01), Chelba et al.
patent: 2006/0161436 (2006-07-01), Liedtke et al.
M. El-Bèze, C. de Loupy and P.-F. Marteau, “Using Semantic Classification Trees for WSD”, Laboratoire d'Informatique d'Avignon (LIA), BP 1228, F-84911 Avignon Cedex 9 (France), Bertin Technologies, ZI des Gâtines-B.P.3, F-78373 Plaisir Cedex.
Stephen Patel, Joseph Smarr, “Automatic Classification of Previously Unseen Proper Noun Phrases into Semantic Categories Using an N-Gram Letter Model”, CS 224N Final Project, Stanford University, Spring 2001.
Jian-Yun Nie, Mingwen Wang, “A Latent Semantic Structure Model for Text Classification”, DIRO, Université de Montréal, Québec, H3C 3J7 Canada, School of Computer Science and Technology, Jiangxi Normal University, 330027, Nanchang, Jiangxi, China.
Egbert Ammicht, Allen Gorin, Tirso Alonso, “Knowledge Collection for Natural Language Spoken Dialog Systems”, Eurospeech, 1999.
Paul C. Constantinides, Alexander Rudnicky, “Dialog Analysis in the Carnegie Mellon Communicator”, Eurospeech 1999.
Jongho Shin, Shrikanth Narayanan, Laurie Gerber, Abe Kazemzadeh, Dani Byrd, “Analysis of User Behavior Under Error Conditions in Spoken Dialogs”, printed Dec. 2003.

