Spoken language identification system and methods for...

Data processing: speech signal processing – linguistics – language – Speech signal processing – Recognition

Reexamination Certificate


Details

US Classification: C704S231000, C704S236000, C704S254000
Type: Reexamination Certificate
Status: active
Patent Number: 07917361

ABSTRACT:
A method for training a spoken language identification system to identify an unknown language as one of a plurality of known candidate languages includes creating a sound inventory comprising a plurality of sound tokens, the collective plurality of sound tokens provided from a subset of the known candidate languages. The method further includes providing a plurality of training samples, each training sample composed in one of the known candidate languages, and generating one or more training vectors from each training sample, wherein each training vector is defined as a function of the plurality of sound tokens provided from the subset of the known candidate languages. The method further includes associating each training vector with the candidate language of the corresponding training sample.
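The training procedure the abstract describes — a shared sound-token inventory drawn from a subset of languages, count-based training vectors over that inventory, and a language label attached to each vector — resembles the "bag-of-sounds" approach cited in the references. A minimal sketch of that vectorization step is given below; the token sequences and function names are hypothetical stand-ins (a real system would obtain the tokens from a phone recognizer), not the patent's actual implementation.

```python
from collections import Counter

def build_sound_inventory(token_sequences):
    """Collect the distinct sound tokens observed across the training subset."""
    return sorted({tok for seq in token_sequences for tok in seq})

def training_vector(token_sequence, inventory):
    """Map one sample's recognized token sequence to a normalized
    count vector over the shared sound inventory."""
    counts = Counter(token_sequence)
    total = len(token_sequence) or 1
    return [counts[tok] / total for tok in inventory]

# Hypothetical recognizer outputs for two training samples,
# keyed by the candidate language each sample is composed in.
samples = {
    "en": ["ah", "t", "s", "ah", "n"],
    "zh": ["sh", "ih", "sh", "ih", "ma"],
}

inventory = build_sound_inventory(samples.values())
# Each training vector stays associated with its sample's language label.
vectors = {lang: training_vector(seq, inventory) for lang, seq in samples.items()}
```

Because every sample is projected onto the same inventory, vectors from different languages are directly comparable, which is what lets a downstream classifier separate the candidate languages.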

REFERENCES:
patent: 5689616 (1997-11-01), Li
patent: 5805771 (1998-09-01), Muthusamy et al.
patent: 6029124 (2000-02-01), Gillick et al.
patent: 6085160 (2000-07-01), D'hoore et al.
patent: 6675143 (2004-01-01), Barnes et al.
patent: 7319958 (2008-01-01), Melnar et al.
patent: 7689404 (2010-03-01), Khasin
patent: 2002/0128827 (2002-09-01), Bu et al.
patent: 2003/0233233 (2003-12-01), Hong
patent: 2004/0158466 (2004-08-01), Miranda
patent: 508564 (2002-11-01), None
Navratil, J.; Zuhlke, W., “An Efficient Phonotactic-Acoustic System for Language Identification,” Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 12-15, 1998, vol. 2, pp. 781-784.
K. M. Berkling et al., “Language Identification of Six Languages Based on a Common Set of Broad Phonemes,” Proc. International Conference on Spoken Language Processing, vol. 4, pp. 1891-1894, 1994.
C. Corredor-Ardoy et al., “Language Identification with Language-Independent Acoustic Models,” Proc. Eurospeech, vol. 1, pp. 55-58, 1997.
H. Li et al., “A Phonotactic Language Model for Spoken Language Identification,” Proc. 43rd Annual Meeting of the Association for Computational Linguistics, pp. 515-522, Jun. 2005.
B. Ma et al., “Spoken Language Identification Using Bag-of-Sounds,” Proc. International Conference on Chinese Computing, Mar. 2005.
M. A. Zissman et al., “Automatic Language Identification,” Speech Communications, vol. 35, no. 1-2, pp. 115-124, 2001.
Y. K. Muthusamy et al., “Reviewing Automatic Language Identification,” IEEE Signal Processing Magazine, vol. 11, No. 4, pp. 33-41, Oct. 1994.
Kwan, H. et al., “Recognized Phoneme-Based N-Gram Modeling in Automatic Language Identification,” 4th European Conference on Speech Communication and Technology (Eurospeech), Madrid: Graficas Brens, ES, vol. 2, Conf. 4, Sep. 18, 1995, pp. 1367-1370.
M. A. Zissman, “Comparison of Four Approaches to Automatic Language Identification of Telephone Speech,” vol. 4, no. 1, pp. 31 and 36, Jan. 1996.
Huang et al., “Spoken Language Processing: A Guide to Theory, Algorithm, and System Development,” Prentice Hall PTR, 2001, pp. 552-553.


Profile ID: LFUS-PAI-O-2723531
