Speech processing using an expanded left to right parser

Data processing: speech signal processing – linguistics – language – Recognition

Patent


Details

Classes: 704/256, G10L 5/06
Type: Patent
Status: active
Number: 060583657

ABSTRACT:
Continuous speech is recognized by selecting among hypotheses, that is, candidate symbol strings formed by connecting the phonemes whose Hidden Markov Models (HMMs) have the highest probability, using an HMM phoneme verification portion that refers to phoneme-context-dependent HMMs for the input speech. A phoneme-context-dependent LR (left-to-right) parser portion predicts the subsequent phoneme by consulting the action entries stored in an LR parsing table, and uses those same action entries to predict the phoneme context surrounding the predicted phoneme.
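The mechanism described above, predicting which phonemes may come next from the action entries of an LR table, and scoring each surviving hypothesis with a context-dependent verification step, can be sketched in miniature. This is a hypothetical illustration, not the patented implementation: the grammar, table, state numbering, and scoring function are all invented for the example, and the real system would use HMM likelihoods rather than the placeholder scorer.

```python
# Toy LR action table over phoneme symbols (illustrative only; states and
# actions are hand-built for the two word hypotheses "ka" and "ki").
# An entry maps (state, phoneme) -> action, as in an LR parsing table.
ACTION_TABLE = {
    (0, "k"): ("shift", 1),
    (1, "a"): ("shift", 2),
    (1, "i"): ("shift", 3),
    (2, "$"): ("accept", None),
    (3, "$"): ("accept", None),
}

def predict_next_phonemes(state):
    """Phonemes the LR table allows next in this state (the prediction step)."""
    return sorted(p for (s, p) in ACTION_TABLE if s == state and p != "$")

def parse(phonemes, score_fn):
    """Drive the toy LR table over a phoneme string, accumulating a
    context-dependent score for each shifted phoneme (a stand-in for
    the HMM phoneme verification step). Returns None if the table
    never predicted a phoneme, i.e. the hypothesis is pruned."""
    state, total, prev = 0, 0.0, None  # prev = left phoneme context
    for p in phonemes + ["$"]:
        action = ACTION_TABLE.get((state, p))
        if action is None:
            return None                  # phoneme not predicted here: prune
        kind, nxt = action
        if kind == "accept":
            return total
        total += score_fn(prev, p)       # context-dependent score (placeholder)
        prev, state = p, nxt

def toy_score(left_context, phoneme):
    """Placeholder scorer; a real system would use context-dependent
    HMM likelihoods conditioned on the surrounding phonemes."""
    return 1.0 if left_context else 0.5

print(predict_next_phonemes(1))       # phonemes predicted after "k"
print(parse(["k", "a"], toy_score))   # score for the hypothesis "ka"
print(parse(["k", "o"], toy_score))   # "o" is never predicted, so pruned
```

The point of the sketch is the coupling: the parser's action entries simultaneously constrain which phonemes the verifier must score and supply the left/right context those scores depend on, so unpredicted phoneme sequences are pruned before any acoustic scoring is done.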

REFERENCES:
patent: 4931928 (1990-06-01), Greenfeld
patent: 4984178 (1991-01-01), Hemphill et al.
patent: 5033087 (1991-07-01), Bahl et al.
patent: 5054074 (1991-10-01), Bakis
patent: 5086472 (1992-02-01), Yoshida
patent: 5105353 (1992-04-01), Charles et al.
L.R. Bahl et al., "Decision Trees For Phonological Rules in Continuous Speech", ICASSP '91 (Toronto, Canada) May 14-17, 1991, pp. 185-188.


Profile ID: LFUS-PAI-O-1601735
