Use of a unified language model

Data processing: speech signal processing – linguistics – language – natural language


Details

U.S. classifications: C704S257000, C704S275000
Type: Reexamination Certificate
Status: active
Patent number: 07016830

ABSTRACT:
A language processing system includes a unified language model. The unified language model comprises a plurality of context-free grammars having non-terminal tokens representing semantic or syntactic concepts and terminals, and an N-gram language model having non-terminal tokens. A language processing module capable of receiving an input signal indicative of language accesses the unified language model to recognize the language. The language processing module generates hypotheses for the received language as a function of words of the unified language model and/or provides an output signal indicative of the language and at least some of the semantic or syntactic concepts contained therein.
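The abstract describes an N-gram whose vocabulary also contains non-terminal tokens, with each non-terminal backed by a context-free grammar that covers a span of words. The toy sketch below illustrates that combination: a hypothesis is scored as the bigram probability of a mixed word/non-terminal token sequence, multiplied by the CFG probability of each span a non-terminal absorbs. The `<CITY>` grammar, the bigram table, and every probability here are invented for illustration; this is a minimal sketch of the idea, not the patented implementation.

```python
import math

# Toy CFG: each non-terminal token expands to word spans with probabilities.
# The <CITY> grammar and all numbers are assumptions for illustration only.
CFGS = {
    "<CITY>": {("new", "york"): 0.5, ("seattle",): 0.5},
}

# Toy bigram over words AND non-terminal tokens (the "unified" vocabulary).
BIGRAM = {
    ("<s>", "fly"): 0.5, ("fly", "to"): 0.9,
    ("to", "<CITY>"): 0.6, ("<CITY>", "</s>"): 0.8,
}

def segmentations(words):
    """Yield (token sequence, CFG log-prob) pairs in which any word span
    covered by a CFG rule may be replaced by its non-terminal token."""
    if not words:
        yield [], 0.0
        return
    # Keep the first word as a plain terminal ...
    for rest, lp in segmentations(words[1:]):
        yield [words[0]] + rest, lp
    # ... or let a CFG non-terminal absorb a span starting here.
    for nt, rules in CFGS.items():
        for rhs, p in rules.items():
            if tuple(words[:len(rhs)]) == rhs:
                for rest, lp in segmentations(words[len(rhs):]):
                    yield [nt] + rest, lp + math.log(p)

def score(words):
    """Best log-probability over all segmentations: bigram probability of
    the token sequence plus the CFG log-prob of each absorbed span."""
    best = float("-inf")
    for tokens, cfg_lp in segmentations(words):
        seq = ["<s>"] + tokens + ["</s>"]
        lp = cfg_lp
        for a, b in zip(seq, seq[1:]):
            if (a, b) not in BIGRAM:  # unseen bigram: prune this hypothesis
                break
            lp += math.log(BIGRAM[(a, b)])
        else:
            best = max(best, lp)
    return best

# "fly to seattle" is only reachable via the <CITY> non-terminal:
# 0.5 * 0.9 * 0.6 * 0.8 (bigrams) * 0.5 (CFG) = 0.108
print(round(math.exp(score(["fly", "to", "seattle"])), 3))  # 0.108
```

In a real recognizer the CFG expansion would happen inside the decoder's search rather than by enumerating segmentations, but the scoring decomposition is the same.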

REFERENCES:
patent: 4831550 (1989-05-01), Katz
patent: 4945566 (1990-07-01), Mergel et al.
patent: 4947438 (1990-08-01), Paeseler
patent: 5263117 (1993-11-01), Nadas et al.
patent: 5384892 (1995-01-01), Strong
patent: 5477451 (1995-12-01), Brown et al.
patent: 5502774 (1996-03-01), Bellegarda et al.
patent: 5615296 (1997-03-01), Stanford et al.
patent: 5621809 (1997-04-01), Bellegarda et al.
patent: 5680511 (1997-10-01), Baker et al.
patent: 5689617 (1997-11-01), Pallakoff et al.
patent: 5710866 (1998-01-01), Alleva et al.
patent: 5752052 (1998-05-01), Richardson et al.
patent: 5765133 (1998-06-01), Antoniol et al.
patent: 5819220 (1998-10-01), Sarukkai et al.
patent: 5829000 (1998-10-01), Huang et al.
patent: 5835888 (1998-11-01), Kanevsky et al.
patent: 5899973 (1999-05-01), Bandara et al.
patent: 5905972 (1999-05-01), Huang et al.
patent: 5913193 (1999-06-01), Huang et al.
patent: 5937384 (1999-08-01), Huang et al.
patent: 5963903 (1999-10-01), Hon et al.
patent: 6073091 (2000-06-01), Kanevsky et al.
patent: 6081779 (2000-06-01), Besling et al.
patent: 6141641 (2000-10-01), Hwang et al.
patent: 6154722 (2000-11-01), Bellegarda
patent: 6157912 (2000-12-01), Kneser et al.
patent: 6167398 (2000-12-01), Wyard et al.
patent: 6182039 (2001-01-01), Rigazio et al.
patent: 6188976 (2001-02-01), Ramaswamy
patent: 6567778 (2003-05-01), Chao Chang et al.
patent: 0 645 757 (1995-03-01), None
patent: 0 687 987 (1995-12-01), None
patent: WO 96/41333 (1996-12-01), None
patent: WO 98/34180 (1998-06-01), None
Mergel, A. et al., “Construction of Language Models for Spoken Database Queries”, IEEE, 1987, pp. 844-847.
Ward, W., “Understanding Spontaneous Speech: The Phoenix System”, Proceedings ICASSP, 1991, pp. 365-367.
Matsunaga et al., “Task Adaptation in Stochastic Language Models for Continuous Speech Recognition”, IEEE Mar. 23, 1992, pp. I-165-I-168.
Moore, R., et al., “Combining Linguistic and Statistical Knowledge Sources in Natural-Language Processing for ATIS”, in Proceedings of the ARPA Spoken Language Systems Technology Workshop, 1995, Morgan Kaufmann, Los Altos, CA; Austin, Texas.
PJ Wyard et al., “Spoken Language Systems-Beyond Prompt and Response”, BT Technology Journal, Jan. 1996, No. 1, pp. 187-205.
Huang, X., et al., “From Sphinx II to Whisper: Making Speech Recognition Usable”, in Automatic Speech and Speaker Recognition, C.H. Lee, F.K. Soong, and K.K. Paliwal, Editors, 1996, Kluwer Academic Publishers: Norwell, MA, pp. 481-508.
“Implications of the Perplexity Definition”, EAGLES Handbook of Standards and Resources for Spoken Language Systems, Online!, May 1997.
Kneser et al., “Semantic Clustering for Adaptive Language Modelling”, IEEE, 1997, pp. 779-782.
Masataki et al., “Task Adaptation Using Map Estimation in N-gram Language Modeling”, IEEE, 1997, pp. 783-786.
Niesler et al., “Modelling Word-Pair Relations in a Category-Based Language Model”, IEEE, 1997, pp. 795-798.
Bellegarda, J., “A Statistical Language Modeling Approach Integrating Local and Global Constraints”, IEEE, 1997, pp. 262-269.
Seneff, S., “The Use of Linguistic Hierarchies in Speech Understanding”, in ICSLP, 1998, Sydney, Australia.
Gillett, J. and W. Ward, “A Language Model Combining Trigrams and Stochastic Context-Free Grammars”, in ICSLP, 1998, Sydney, Australia.
Galescu, L., E.K. Ringger, and J. Allen, “Rapid Language Model Development for New Task Domains”, in Proceedings of the ELRA First International Conference on Language Resources and Evaluation (LREC), 1998, Granada, Spain.
Nasr, A., et al., “A Language Model Combining N-grams and Stochastic Finite State Automata”, in Eurospeech, 1999.
Wang, Y.-Y., “A Robust Parser for Spoken Language Understanding”, in Eurospeech, 1999, Hungary.
Wang, K., “An Event Driven Model for Dialogue Systems”, in ICSLP, 1998, Sydney, Australia.
Mahajan, M., D. Beeferman, and X.D. Huang, “Improved Topic-Dependent Language Modeling Using Information Retrieval Techniques”, in ICASSP, 1999, Phoenix, AZ., USA.
Ward et al., “Flexible Use of Semantic Constraints in Speech Recognition,” Apr. 1993, 1993 IEEE ICASSP, vol. 2, pp. 49-50.
Souvignier et al. “The Thoughtful Elephant: Strategies for Spoken Dialog Systems,” Jan. 2000, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, pp. 51-62.
Goodman, J.T., “Putting It All Together: Language Model Combination,” Acoustics, Speech, and Signal Processing, 2000. ICASSP '00 Intern'l Conf. On, v. 3, pp. 1647-1650.
Wang, Ye-Yi et al., “Unified Context-Free Grammar and N-Gram Model for Spoken Language Processing,” Acoustics, Speech, and Signal Processing, 2000 IEEE Intern'l Conf. On, v. 3, pp. 1639-1642.
Tsukada, H. et al., “Reliable Utterance Segment Recognition by Integrating a Grammar with Statistical Language Constraints,” Speech Communication, Elsevier Science Publishers, Dec. 1998, vol. 26, No. 4, pp. 299-309.
Moore, R., “Using Natural-Language Knowledge Sources in Speech Recognition,” in Proceedings of Computational Models of Speech Pattern Processing, Jul. 1997, pp. 304-327.
Takezawa, T. et al., “Dialogue Speech Recognition Using Syntactic Rules Based on Subtrees and Preterminal Bigrams,” Systems & Computers in Japan, Scripta Technica Journals, May 1, 1997, vol. 28, No. 5, pp. 22-32.
Hwang, M.Y., et al., “Predicting Unseen Triphones with Senones,” IEEE Transactions on Speech and Audio Processing, Nov. 1996, pp. 412-419.
Kawabata, T., et al., “Back-Off Method for N-Gram Smoothing Based on Binomial Posterior Distribution,” Acoustics, Speech, and Signal Processing, 1996. ICASSP-96, v. 1, pp. 192-195.
Database Inspec Online!, Institution of Electrical Engineers, “Improvement of a Probabilistic CFG Using a Cluster-Based Language Modeling Technique,” in “Methodologies for the Conception, Design, and Application of Intelligent Systems,” Abstract, 1996.
Huang, X., et al., “Microsoft Windows Highly Intelligent Speech Recognizer: Whisper,” 1995, IEEE, pp. 93-96.
Lloyd-Thomas, H., et al., “An Integrated Grammar/Bigram Language Model Using Path Scores,” Proceedings of the International Conference on Acoustics, Speech and Signal Processing, May 9, 1995, vol. 1, pp. 173-176.
Meteer, M., et al., “Statistical Language Modeling Combining N-Gram and Context-Free Grammars,” ICASSP-93, Minneapolis, Apr. 27-30, 1993, IEEE, vol. 2, pp. II-37-II-40.
Lippmann, E.A., et al., “Multi-Style Training for Robust Isolated-Word Speech Recognition,” Proceedings of DARPA Speech Recognition Workshop, Mar. 24-26, 1987, pp. 96-99.
Jelinek et al. “Putting Language into Language Modeling,” Proceedings of Eurospeech 1999, pp. 1-5.
