Search optimization system and method for continuous speech...

Data processing: speech signal processing – linguistics – language – Speech signal processing – Recognition

Reexamination Certificate


Details

C704S257000


active

06397179


FIELD OF THE INVENTION
This invention relates to a system and method for optimization of searching for continuous speech recognition.
BACKGROUND OF THE INVENTION
Speech recognition for applications such as automated directory enquiry assistance and control of operation based on speech input requires a real-time response: spoken input must be recognized within about half a second of the end of the utterance to simulate the response of a human operator and avoid a perception of unnatural delay.
Processing of speech input falls into five main steps: audio channel adaptation, feature extraction, word end-point detection, speech recognition, and accept/reject decision logic. Pattern recognition in general, and more particularly recognition of patterns in continuous signals such as speech, requires complex calculations and depends on providing sufficient processing power to meet the computational load. The speech recognition step is thus the most computationally intensive step of the process.
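For orientation, the sketch below strings these five steps together as plain functions; every stage here is a trivial placeholder invented for illustration, not an implementation taken from the patent.
```python
# Toy pipeline mirroring the five steps above; each stage is a stand-in only.

def adapt_channel(audio):                  # 1. audio channel adaptation
    return audio

def extract_features(audio):               # 2. feature extraction (per-frame vectors)
    return [audio[i:i + 160] for i in range(0, len(audio), 160)]

def detect_endpoints(frames):              # 3. word end-point detection
    return frames                          #    assume the whole utterance is speech

def recognize(frames):                     # 4. speech recognition -- the costly search
    return [("hello", 0.7), ("hollow", 0.2)]

def accept_or_reject(hypotheses, threshold=0.5):   # 5. accept/reject decision logic
    word, score = max(hypotheses, key=lambda h: h[1])
    return word if score >= threshold else None

audio_samples = [0.0] * 1600               # stand-in for raw audio samples
frames = extract_features(adapt_channel(audio_samples))
print(accept_or_reject(recognize(detect_endpoints(frames))))   # -> 'hello'
```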
The computational load depends on the number of words or other elements of speech that are modeled and held in a dictionary for comparison with the spoken input (i.e., the size of the system's vocabulary); the complexity of the models in the dictionary; how the speech input is processed into a representation ready for comparison with the models; and the algorithm used for carrying out the comparison. Numerous attempts have been made to improve the trade-off between computational load, accuracy of recognition, and speed of recognition.
Examples are described, e.g., in U.S. Pat. No. 5,390,278 to Gupta et al. and U.S. Pat. No. 5,515,475 to Gupta et al. Many other background references are included in the above-referenced copending applications.
In order to provide speech recognition that works efficiently in real time, two approaches are generally considered. The first is to make use of specialized hardware or parallel processing architectures. The second is to develop optimized search methods based on search algorithms that yield reasonable accuracy, but at a fraction of the cost of more optimal approaches. The latter approach is favored by many researchers, since it tackles the problem at the source; see, for example, Schwartz, R., Nguyen, L., Makhoul, J., “Multiple-pass search strategies”, in Automatic Speech and Speaker Recognition, Lee, C. H., Soong, F. K., Paliwal, K. K. (eds.), Kluwer Academic Publishers (1996), pp. 429-456. This approach is also appealing because hardware and algorithmic optimizations are often orthogonal, so the latter can always be built on top of the former.
The basic components of a spoken language processing (SLP) system include a continuous speech recognizer (CSR) for receiving spoken input from the user and a Natural Language Understanding component (NLU), represented schematically in FIG. 1. A conventional system operates as follows. Speech input is received by the CSR, and a search is performed by the CSR using acoustic models that model speech sounds, and a language model or ‘grammar’ that describes how words may be connected together. The acoustic model is typically in the form of Hidden Markov Models (HMM) describing the acoustic space. The language knowledge is usually used for both the CSR component and the NLU component, as shown in FIG. 1, with information on grammar and/or statistical models being used by the CSR, and semantic information being used by the NLU. The structure of the language is often used to constrain the search space of the recognizer. If the goal is to recognize unconstrained speech, the language knowledge usually takes the form of a statistical language model (bigram or trigram). If the goal is to recognize a specific constrained vocabulary, then the language knowledge takes the form of a regular grammar.
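As a concrete illustration of the statistical case, a bigram model simply estimates the probability of a word given the previous word from counts; the toy sketch below is an invented illustration under that assumption, not material from the patent (real systems use smoothing and log-probabilities over large corpora).
```python
from collections import defaultdict

# Toy bigram language model: maximum-likelihood estimate of
# P(next word | previous word) from a tiny hypothetical corpus.
corpus = [
    ["call", "directory", "assistance"],
    ["call", "the", "operator"],
    ["directory", "assistance", "please"],
]

bigram_counts = defaultdict(lambda: defaultdict(int))
context_counts = defaultdict(int)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        bigram_counts[prev][nxt] += 1
        context_counts[prev] += 1

def bigram_prob(prev, nxt):
    """P(nxt | prev) estimated from counts; 0.0 for unseen contexts."""
    if context_counts[prev] == 0:
        return 0.0
    return bigram_counts[prev][nxt] / context_counts[prev]

print(bigram_prob("call", "directory"))        # 0.5 in this toy corpus
print(bigram_prob("directory", "assistance"))  # 1.0
```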
The search passes the recognized word strings representing several likely choices, in the form of a graph, to the natural language understanding component, which extracts meaning from the recognized word strings. The language model provides knowledge to the NLU relating to understanding of the recognized word strings. More particularly, the semantic information from the language knowledge is fed exclusively to the NLU component, together with information on how to construct a meaning representation of the CSR's output. This involves, among other things, identifying which words are important to the meaning and which are not; the latter are referred to as non-keywords or semantically-null words. Thus semantically-meaningful words and semantically-null words are identified to provide understanding of the input, and in the process the word strings are converted to a standard logical form. The logical form is passed to a discourse manager (DM), which is the interface between the user and the application. The DM gathers the necessary information from the user, by prompting for input, in order to request that the application perform the user's goal.
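A hypothetical sketch of that keyword/non-keyword split and of a simple logical form appears below; the word lists, slot names, and output layout are invented for illustration and are not the patent's representation.
```python
# Hypothetical keyword/non-keyword split performed by an NLU component.
SEMANTICALLY_NULL = {"i", "would", "like", "to", "please", "um", "the"}
KEYWORD_SLOTS = {"call": "action", "cancel": "action",
                 "boston": "city", "toronto": "city"}

def to_logical_form(word_string):
    """Keep only meaning-bearing words and place them in named slots."""
    form = {}
    for word in word_string.lower().split():
        if word in SEMANTICALLY_NULL:
            continue                      # ignore semantically-null words
        slot = KEYWORD_SLOTS.get(word)
        if slot:
            form[slot] = word             # keep semantically-meaningful words
    return form

print(to_logical_form("I would like to call Boston please"))
# {'action': 'call', 'city': 'boston'}
```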
While the terms ‘grammar’ and ‘language model’ are often used interchangeably, in this application a language model is defined as the graph that is used by the CSR search algorithm to perform recognition, while a grammar is a set of rules, which may also be represented as a graph, used by the NLU component to extract meaning from the recognized speech. There may be a one-to-one mapping between the language model and the grammar when the language model is a constrained model; Connected Word Recognition (CWR) is an example of the latter. Nevertheless, the known spoken language systems described above separate language knowledge into grammar and semantic information, feeding the former to the CSR and the latter to the NLU.
Most search optimization techniques involve reducing computation by making use of local scores during the decoding of a speech utterance. Copending U.S. application Ser. No. 09/118,621, entitled “Block algorithm for pattern recognition” and referenced above, describes in detail an example of a search algorithm and scoring method.
For example, the Viterbi beam search, without a doubt the most widely used optimization, prunes the paths whose scores (likelihoods) fall outside a beam determined by the best local score. Some neural-network-based approaches threshold the posterior probability of each state to determine whether it should remain active (Bourlard, H., Morgan, N., “Connectionist Speech Recognition: A Hybrid Approach”, Kluwer Academic Press, 1994).
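A minimal sketch of the beam-pruning step follows, assuming log-likelihood path scores and an illustrative beam width; it is not the patent's search method.
```python
# Beam pruning as used in a Viterbi beam search: after each frame, paths whose
# log-likelihood falls more than BEAM below the best local score are discarded.
BEAM = 10.0   # illustrative log-likelihood beam width

def prune(active_paths):
    """active_paths: dict mapping state -> best log-likelihood so far."""
    best = max(active_paths.values())
    return {state: score for state, score in active_paths.items()
            if score >= best - BEAM}

paths = {"s1": -12.3, "s2": -15.1, "s3": -40.7, "s4": -13.0}
print(prune(paths))   # "s3" is pruned: more than BEAM below the best score (-12.3)
```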
Another important technique that has helped reduce the computational burden is the use of lexical trees instead of dedicated acoustic networks, as described by Ney, H., Aubert, X., “Dynamic Programming Search Strategies: From Digit Strings to Large Vocabulary Word Graphs”, in Automatic Speech and Speaker Recognition, Lee, C. H., Soong, F. K., Paliwal, K. K. (eds.), Kluwer Academic Publishers (1996), pp. 385-411. Along with that idea came language-model look-ahead techniques to enhance the pruning, described by Murveit, H., Monaco, P., Digalakis, V., Butzberger, J., “Techniques to Achieve an Accurate Real-Time Large-Vocabulary Speech Recognition System”, in ARPA Workshop on Human Language Technology, pp. 368-373.
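The lexical-tree idea can be sketched as a prefix tree over pronunciations, so that words sharing an initial phone sequence share search states and the common prefix is evaluated only once; the pronunciations below are invented for illustration.
```python
# Sketch of a lexical (prefix) tree built from a tiny invented lexicon.
lexicon = {
    "cat": ["k", "ae", "t"],
    "can": ["k", "ae", "n"],
    "cup": ["k", "ah", "p"],
}

def build_lexical_tree(lexicon):
    root = {}
    for word, phones in lexicon.items():
        node = root
        for phone in phones:
            node = node.setdefault(phone, {})   # shared prefixes share nodes
        node["#word"] = word                    # mark the leaf with the word
    return root

tree = build_lexical_tree(lexicon)
# "cat" and "can" share the nodes for "k" and "ae"; only the final arc differs.
print(tree["k"]["ae"].keys())                   # dict_keys(['t', 'n'])
```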
While these techniques are undisputedly effective at solving these specific problems, in all cases, the sole sources of “language knowledge” used to reduce the search space are the language model and the grammar layout; semantic information is not used by the CSR.
Word spotting techniques are an attempt to use semantic information indirectly by focusing the recognizer on the list of keywords (or key phrases) that are semantically meaningful. Some word spotting techniques use background models of speech in an attempt to capture every word that is not in the word spotter's dictionary, including semantically-null words (non-keywords) (Rohlicek, J. R., Russel, W., Roukos, S., Gish, H., “Word Spotting”, ICASSP 1989, pp. 627-630).
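A hypothetical sketch of that scheme: each segment is scored against the keyword models and against a generic background (filler) model, and a keyword is reported only if it beats the background. The scoring functions here are crude stand-ins for acoustic model likelihoods, invented purely for illustration.
```python
# Toy word spotter: keyword models compete against a generic background model.
KEYWORDS = ["collect", "operator", "directory"]

def keyword_score(segment, keyword):
    # Stand-in for an acoustic model score (e.g. an HMM log-likelihood).
    return -abs(len(segment) - 4 * len(keyword))

def background_score(segment):
    # Stand-in for a generic "anything else" model of speech.
    return -len(segment) / 2

def spot(segment):
    best_kw, best = None, background_score(segment)
    for kw in KEYWORDS:
        score = keyword_score(segment, kw)
        if score > best:
            best_kw, best = kw, score
    return best_kw            # None means the background model won (non-keyword)

print(spot(list(range(32))))  # -> 'operator' for this 32-frame toy segment
```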
While word spotting is generic, it is very costly and provides poor accuracy, especially when there is prior knowledge of which non-keywords are likely to be used. Because these latter models
