Speed up speech recognition search using macro evaluator

Data processing: speech signal processing, linguistics, language – Speech signal processing – Recognition



Details

C704S231000, C704S255000

Reexamination Certificate

active

06285981

ABSTRACT:

TECHNICAL FIELD OF THE INVENTION
This invention relates to speech recognition and more particularly to speeding up recognition search using a macro evaluator.
BACKGROUND OF THE INVENTION
Speech recognition involves searching and comparing the input speech to speech models representing the vocabulary to identify words and sentences, as shown in FIG. 1.
The search speed for large-vocabulary speech recognition has been an active research area for the past few years. Even on a state-of-the-art workstation, search can take hundreds of times real time for a large-vocabulary task (20K words). Most fast search algorithms involve multiple passes of search: simple models (e.g. monophones) are used to do a quick, rough search and output a much smaller N-best sub-space; then detailed models (e.g. clustered triphones with mixtures) search that sub-space and output the final results (see Fil Alleva et al., “An Improved Search Algorithm Using Incremental Knowledge for Continuous Speech Recognition,” ICASSP 1993, Vol. 2, 307-310; Long Nguyen et al., “Search Algorithms for Software-Only Real-Time Recognition with Very Large Vocabulary,” ICASSP; and Hy Murveit et al., “Progressive-Search Algorithms for Large Vocabulary Speech Recognition,” ICASSP). The first pass, which uses monophones to reduce the search space, introduces error; therefore the reduced search space has to be large enough to contain the best path. This process requires a lot of experimentation and fine-tuning.
The search process involves expanding a search tree according to the grammar and lexical constraints. The size of the search tree grows exponentially with the size of the vocabulary. Viterbi beam search is used to prune away improbable branches of the tree; however, the tree is still very large for large vocabulary tasks.
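As an illustration only (not part of the invention), the following Python sketch shows how Viterbi beam pruning keeps, at each frame, only hypotheses whose log-score is within a fixed beam of the best one; the expand function and all names are hypothetical placeholders.

# Hypothetical sketch of Viterbi search with beam pruning (log-scores, larger is better).
def beam_search(frames, initial_hyps, expand, beam_width=200.0):
    # initial_hyps: dict mapping a network node to its starting log-score.
    # expand: function (node, frame) -> list of (next_node, log_prob) pairs.
    active = dict(initial_hyps)
    for frame in frames:
        successors = {}
        for node, score in active.items():
            for next_node, log_prob in expand(node, frame):
                new_score = score + log_prob
                # Viterbi: keep only the best score reaching each node.
                if new_score > successors.get(next_node, float("-inf")):
                    successors[next_node] = new_score
        if not successors:
            break
        best = max(successors.values())
        # Beam pruning: discard improbable branches of the search tree.
        active = {n: s for n, s in successors.items() if s >= best - beam_width}
    return active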
A multi-pass algorithm is often used to speed up the search. Simple models (e.g. monophones) are used to do a quick, rough search and output a much smaller N-best subspace. Because there are very few models, the search can be done much faster. However, the accuracy of these simple models is not good enough, so a large enough N-best subspace has to be preserved for the following stages of search with more detailed models.
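Purely as a hypothetical sketch of this two-pass idea (the scorer functions are placeholders, not the patent's method): a cheap first pass keeps only the N best hypotheses, and that reduced subspace is then rescored with the detailed models.

import heapq

def two_pass_search(hypotheses, cheap_score, detailed_score, n_best=100):
    # Pass 1: quick, rough scoring (e.g. monophones) keeps only the N best candidates.
    shortlist = heapq.nlargest(n_best, hypotheses, key=cheap_score)
    # Pass 2: detailed models (e.g. clustered triphones) rescore only the reduced subspace.
    return max(shortlist, key=detailed_score)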
Another approach is to use a lexical tree to maximize the sharing of evaluation. See Mosur Ravishankar, “Efficient Algorithms for Speech Recognition,” Ph.D. thesis, CMU-CS-96-143, 1996. Also see Julian Odell, “The Use of Context in Large Vocabulary Speech Recognition,” Ph.D. thesis, Queens' College, Cambridge University, 1995. For example, suppose both bake and baked are allowed at a certain grammar node; much of their evaluation can be shared because both words start with the phone sequence /b/ /ey/ /k/. If monophones are used in the first pass of search, then no matter how large the vocabulary is, there are only about 50 English phones the search can start with. This structure is called a lexical tree because the sharing of initial evaluations, with fan-out only where the phones differ, looks like a tree. The effect of a lexical tree can be achieved by removing the word level of the grammar and then canonicalizing (removing redundancy from) the phone network. For example:
% more simple.cfg
start (<S>).
<S> → bake | baked.
bake → b ey k.
baked → b ey k t.
% cfg_merge simple.cfg | rg_from_rgdag | rg_canonicalize
start(<S>).
<S> → b, Z1.
Z1 → ey, Z2.
Z2 → k, Z3.
Z3 → t, Z4.
Z3 → “ ”.
Z4 → “ ”.
The original grammar has two levels: a sentence grammar in terms of words, and a pronunciation grammar (lexicon) in terms of phones. After removing the word level and then canonicalizing the one-level phone network, the same initial phones are automatically shared. The recognizer outputs a phone sequence as the recognition result, which can be parsed (text only) to recover the words. Text parsing takes virtually no time compared to speech recognition parsing.
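As a hypothetical illustration of the sharing described above (not taken from the patent), the following Python sketch folds the pronunciations of bake and baked into a trie, so the common prefix /b/ /ey/ /k/ is stored once and only the final /t/ arc differs.

def build_lexical_tree(lexicon):
    # lexicon: dict mapping word -> list of phones.
    # Returns a nested dict (trie) keyed by phone; the key None marks a word end.
    root = {}
    for word, phones in lexicon.items():
        node = root
        for phone in phones:           # shared prefixes reuse existing nodes
            node = node.setdefault(phone, {})
        node[None] = word              # word identity recovered later by text parsing
    return root

lexicon = {"bake": ["b", "ey", "k"], "baked": ["b", "ey", "k", "t"]}
tree = build_lexical_tree(lexicon)
# Both words share the single branch b -> ey -> k; only the /t/ arc is word-specific.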
It is desirable to provide a method to speed up the search that does not introduce error and can be used independently of multi-pass search or a lexical tree.
SUMMARY OF THE INVENTION
In accordance with one embodiment of the present invention, faster search and a smaller search space are provided by treating a whole HMM as an integral unit in the search network.
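The patent text above gives no code; purely as an assumed, minimal sketch of what treating a whole HMM as one unit could look like, the following hypothetical Python class hides a left-to-right HMM's internal Viterbi updates behind a single node, so the surrounding search network only injects an entry score and reads back an exit score. All names, transition values, and structure are assumptions for illustration, not the patent's macro evaluator.

import math

class MacroHMM:
    # Hypothetical wrapper: the search network sees one node instead of individual HMM states.
    def __init__(self, name, emit_fns, self_loop=math.log(0.5), forward=math.log(0.5)):
        self.name = name
        self.emit_fns = emit_fns                    # one emission log-score function per state
        self.self_loop = self_loop                  # log-prob of staying in a state
        self.forward = forward                      # log-prob of moving to the next state
        self.scores = [float("-inf")] * len(emit_fns)

    def enter(self, score):
        # The external network injects a path score only at the first state.
        self.scores[0] = max(self.scores[0], score)

    def step(self, frame):
        # One internal Viterbi update per frame, invisible to the search network.
        prev = self.scores
        new = [float("-inf")] * len(prev)
        for i, emit in enumerate(self.emit_fns):
            stay = prev[i] + self.self_loop
            move = prev[i - 1] + self.forward if i > 0 else float("-inf")
            new[i] = max(stay, move) + emit(frame)
        self.scores = new

    def exit_score(self):
        # The only value the network needs: the score for leaving the last state.
        return self.scores[-1] + self.forward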
