Nonlinear mapping for feature extraction in automatic speech recognition

Classification: Data processing – speech signal processing, linguistics, language – Speech signal processing – Recognition


Details

U.S. classifications: C704S232000, C704S240000

Type: Reexamination Certificate

Status: active

Application number: 09714806

ABSTRACT:
The present invention successfully combines neural-net discriminative feature processing with Gaussian-mixture distribution modeling (GMM). By training one or more neural networks to estimate subword posterior probabilities, then using transformations of these estimates as the base features for a conventionally trained Gaussian-mixture system, substantial error-rate reductions may be achieved. The present invention effectively has two acoustic models in tandem: first a neural net and then a GMM. By using the variety of combination schemes available for connectionist models, systems based upon multiple feature streams can be constructed with even greater error-rate reductions.
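The tandem pipeline the abstract describes can be sketched in a few lines. The following is a minimal illustration, assuming scikit-learn's MLPClassifier, PCA, and GaussianMixture as stand-ins for the neural net, the posterior transformation, and the Gaussian-mixture system; the synthetic data, layer sizes, and the log-then-PCA transform are illustrative assumptions, not the patent's reference implementation.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_frames, n_dims, n_subwords = 2000, 39, 10           # e.g. MFCC-like frames
X = rng.normal(size=(n_frames, n_dims))               # stand-in acoustic features
y = rng.integers(0, n_subwords, size=n_frames)        # frame-level subword labels

# Stage 1: a discriminatively trained net estimates subword posteriors.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X, y)
posteriors = net.predict_proba(X)                     # shape (n_frames, n_subwords)

# Stage 2: transform the posteriors (log to undo the softmax compression,
# PCA to decorrelate) into base features for a conventional GMM system.
log_post = np.log(posteriors + 1e-8)
feats = PCA(n_components=n_subwords - 1).fit_transform(log_post)

# Stage 3: one Gaussian mixture per subword class, trained on the tandem
# features (here with diagonal covariances, an illustrative choice).
gmms = {
    c: GaussianMixture(n_components=4, covariance_type="diag",
                       random_state=0).fit(feats[y == c])
    for c in range(n_subwords)
}

# Frame-level classification: pick the class whose GMM scores highest
# (class priors are roughly uniform here, so they are omitted).
scores = np.stack([gmms[c].score_samples(feats) for c in range(n_subwords)])
pred = scores.argmax(axis=0)
print("frame accuracy:", (pred == y).mean())

The multiple feature streams mentioned in the abstract's last sentence could be handled, for example, by training one net per stream and averaging their log posteriors before the PCA stage; this is one of several possible combination schemes, not necessarily the one claimed.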

REFERENCES:
patent: 5317673 (1994-05-01), Cohen et al.
patent: 5745649 (1998-04-01), Lubensky
N. Morgan and H. Bourlard, "Continuous speech recognition: an introduction to the hybrid HMM/connectionist approach," IEEE Signal Processing Magazine, 12(3): 25-42, May 1995.
