Method and system for frame alignment and unsupervised...

Data processing: speech signal processing – linguistics – language – recognition

Reexamination Certificate


Details

Classification: C704S243000
Type: Reexamination Certificate
Status: active
Patent number: 06917918

ABSTRACT:
An unsupervised adaptation method and apparatus are provided that reduce the storage and time requirements associated with adaptation. Under the invention, utterances are converted into feature vectors, which are decoded to produce a transcript and alignment unit boundaries for the utterance. Individual alignment units and the feature vectors associated with those alignment units are then provided to an alignment function, which aligns the feature vectors with the states of each alignment unit. Because the alignment is performed within alignment unit boundaries, fewer feature vectors are used and the time for alignment is reduced. After alignment, the feature vector dimensions aligned to a state are added to dimension sums that are kept for that state. After all the states in an utterance have had their sums updated, the speech signal and the alignment units are deleted. Once sufficient frames of data have been received to perform adaptive training, the acoustic model is adapted.
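The storage-saving idea in the abstract — accumulate per-state dimension sums from aligned feature vectors, discard the raw speech data, and adapt only once enough frames have been collected — can be sketched roughly as follows. This is an illustrative reconstruction, not code from the patent; the class name `StateAccumulator` and the `min_frames` threshold parameter are assumptions introduced here.

```python
from collections import defaultdict

class StateAccumulator:
    """Keeps per-state feature-dimension sums so that utterance data
    (speech signal, feature vectors, alignment units) can be deleted
    after each utterance, as described in the abstract."""

    def __init__(self, dim, min_frames=1000):
        self.dim = dim
        self.min_frames = min_frames                 # frames needed before adapting
        self.sums = defaultdict(lambda: [0.0] * dim) # state -> per-dimension sums
        self.counts = defaultdict(int)               # state -> frame count
        self.total_frames = 0

    def accumulate(self, aligned):
        """aligned: list of (state_id, feature_vector) pairs produced by
        aligning feature vectors to states within one alignment unit."""
        for state, vec in aligned:
            sums = self.sums[state]
            for d in range(self.dim):
                sums[d] += vec[d]
            self.counts[state] += 1
            self.total_frames += 1
        # At this point the utterance's raw data is no longer needed.

    def ready(self):
        """True once sufficient frames have been received for adaptation."""
        return self.total_frames >= self.min_frames

    def adapted_means(self):
        """Per-state mean estimates, e.g. for updating Gaussian means
        in the acoustic model (a simple mean-adaptation assumption)."""
        return {state: [s / self.counts[state] for s in self.sums[state]]
                for state in self.sums}
```

For example, after accumulating two 2-dimensional frames aligned to one state, `adapted_means()` returns that state's per-dimension average; the original vectors need not be retained.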

REFERENCES:
patent: 5651094 (1997-07-01), Takagi et al.
patent: 5819223 (1998-10-01), Takagi
patent: 5907825 (1999-05-01), Tzirkel-Hancock
patent: 5920837 (1999-07-01), Gould et al.
“Adaptive Compensation for Robust Speech Recognition”, by Chin-Hui Lee, Proc. 1997 IEEE Workshop on Automatic Speech Recognition and Understanding, pp. 357-364 (Dec. 1997).
“Tree-structured Models of Parameter Dependence for Rapid Adaptation in Large Vocabulary Conversational Speech Recognition”, by A. Kannan et al., 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, p. 769-72 (Mar. 1999).
“N-Best Based Supervised and Unsupervised Adaptation for Native and Non-Native Speakers in Cars”, by P. Nguyen et al., 1999 IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 1, p. 173-6 (Mar. 1999).
“Unsupervised Adaptation Using Structural Bayes Approach”, by K. Shinoda et al., Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, p. 793-6 (May 1998).
“Long Term On-line Speaker Adaptation for Large Vocabulary Dictation”, by E. Thelen, Proceedings ICSLP 96. Fourth International Conference on Spoken Language Processing, vol. 4, p. 2139-42 (Oct. 1996).
“Rapid Unsupervised Adaptation to Children's Speech on A Connected-digit Task”, by D.C. Burnett et al., Fourth International Conference on Spoken Language Processing, vol. 2, p. 1145-8 (Oct. 1996).
“Iterative Unsupervised Adaptation Using Maximum Likelihood Linear Regression”, by P.C. Woodland et al., Proceedings ICSLP 96. Fourth International Conference on Spoken Language Processing, vol. 2, p. 1133-6 (Oct. 1996).
“An Experimental Study of Acoustic Adaptation Algorithms”, by A. Sankar et al., 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing Conference Proceedings, vol. 2, p. 713-16 (May 1996).
“Unsupervised Adaptation to New Speakers in Feature-Based Letter Recognition”, by M.J. Lasry, ICASSP 84. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, p. 17.6/1-4 (Mar. 1984).
