Type: Reexamination Certificate
Filed: 1999-02-08
Issued: 2002-06-25
Examiner: Dorvil, Richemond (Department: 2741)
Classification: Data processing: speech signal processing, linguistics, language / Speech signal processing / For storage or transmission
U.S. Classes: C704S231000, C704S243000, C704S201000, C379S324000
Status: active
Patent number: 06411926
ABSTRACT:
BACKGROUND OF THE INVENTION
I. Field of the Invention
The present invention pertains generally to the field of communications, and more specifically to voice recognition systems.
II. Background
Voice recognition (VR) represents one of the most important techniques to endow a machine with simulated intelligence to recognize a user or user-voiced commands and to facilitate a human interface with the machine. VR also represents a key technique for human speech understanding. Systems that employ techniques to recover a linguistic message from an acoustic speech signal are called voice recognizers. A voice recognizer typically comprises an acoustic processor, which extracts a sequence of information-bearing features, or vectors, necessary to achieve VR of the incoming raw speech, and a word decoder, which decodes the sequence of features, or vectors, to yield a meaningful and desired output format such as a sequence of linguistic words corresponding to the input utterance. To increase the performance of a given system, training is required to equip the system with valid parameters. In other words, the system needs to learn before it can function optimally.
The acoustic processor represents a front-end speech analysis subsystem in a voice recognizer. In response to an input speech signal, the acoustic processor provides an appropriate representation to characterize the time-varying speech signal. The acoustic processor should discard irrelevant information such as background noise, channel distortion, speaker characteristics, and manner of speaking. Efficient acoustic processing furnishes voice recognizers with enhanced acoustic discrimination power. To this end, a useful characteristic to analyze is the short-time spectral envelope. Two commonly used spectral analysis techniques for characterizing the short-time spectral envelope are linear predictive coding (LPC) and filter-bank-based spectral modeling. Exemplary LPC techniques are described in U.S. Pat. No. 5,414,796, which is assigned to the assignee of the present invention and fully incorporated herein by reference, and in L. B. Rabiner & R. W. Schafer, Digital Processing of Speech Signals 396-453 (1978), which is also fully incorporated herein by reference.
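As an illustration of the LPC analysis referred to above, the following is a minimal sketch of the common autocorrelation method with the Levinson-Durbin recursion applied to one frame of speech. The frame length, prediction order, and all names are assumptions made for illustration only; they are not taken from U.S. Pat. No. 5,414,796 or the cited text.

/* Minimal illustrative sketch of LPC analysis by the autocorrelation method
 * with the Levinson-Durbin recursion.  Frame length, prediction order, and
 * all names are assumptions for illustration only. */
#include <stdio.h>
#include <math.h>

#define FRAME_LEN 240            /* 30 ms of speech at 8 kHz */
#define LPC_ORDER 10

/* Autocorrelation of one speech frame for lags 0..LPC_ORDER. */
static void autocorrelate(const float *frame, float *r)
{
    for (int k = 0; k <= LPC_ORDER; k++) {
        r[k] = 0.0f;
        for (int n = k; n < FRAME_LEN; n++)
            r[k] += frame[n] * frame[n - k];
    }
}

/* Levinson-Durbin recursion: solve for predictor coefficients a[1..LPC_ORDER]
 * from the autocorrelation sequence r[0..LPC_ORDER]; returns the final
 * prediction-error energy. */
static float levinson_durbin(const float *r, float *a)
{
    float tmp[LPC_ORDER + 1];
    float err = r[0];

    for (int i = 0; i <= LPC_ORDER; i++)
        a[i] = 0.0f;

    for (int i = 1; i <= LPC_ORDER; i++) {
        float acc = r[i];
        for (int j = 1; j < i; j++)
            acc -= a[j] * r[i - j];
        float k = (err > 0.0f) ? acc / err : 0.0f;  /* reflection coefficient */

        for (int j = 1; j < i; j++)
            tmp[j] = a[j] - k * a[i - j];
        for (int j = 1; j < i; j++)
            a[j] = tmp[j];
        a[i] = k;

        err *= 1.0f - k * k;                        /* remaining error energy */
    }
    return err;
}

int main(void)
{
    float frame[FRAME_LEN], r[LPC_ORDER + 1], a[LPC_ORDER + 1];
    const float w = 2.0f * 3.14159265f * 0.05f;     /* toy frequency */

    /* Toy input: a damped sinusoid standing in for one frame of speech. */
    for (int n = 0; n < FRAME_LEN; n++)
        frame[n] = expf(-0.01f * (float)n) * sinf(w * (float)n);

    autocorrelate(frame, r);
    float err = levinson_durbin(r, a);

    for (int i = 1; i <= LPC_ORDER; i++)
        printf("a[%d] = % f\n", i, a[i]);
    printf("prediction-error energy = %f\n", err);
    return 0;
}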
The use of VR (also commonly referred to as speech recognition) is becoming increasingly important for safety reasons. For example, VR may be used to replace the manual task of pushing buttons on a wireless telephone keypad. This is especially important when a user is initiating a telephone call while driving a car. When using a phone without VR, the driver must remove one hand from the steering wheel and look at the phone keypad while pushing the buttons to dial the call. These acts increase the likelihood of a car accident. A speech-enabled phone (i.e., a phone designed for speech recognition) would allow the driver to place telephone calls while continuously watching the road. And a hands-free car-kit system would additionally permit the driver to maintain both hands on the steering wheel during call initiation.
Speech recognition devices are classified as either speaker-dependent or speaker-independent devices. Speaker-independent devices are capable of accepting voice commands from any user. Speaker-dependent devices, which are more common, are trained to recognize commands from particular users. A speaker-dependent VR device typically operates in two phases, a training phase and a recognition phase. In the training phase, the VR system prompts the user to speak each of the words in the system's vocabulary once or twice so the system can learn the characteristics of the user's speech for these particular words or phrases. Alternatively, for a phonetic VR device, training is accomplished by reading one or more brief articles specifically scripted to cover all of the phonemes in the language. An exemplary vocabulary for a hands-free car kit might include the digits on the keypad; the keywords “call,” “send,” “dial,” “cancel,” “clear,” “add,” “delete,” “history,” “program,” “yes,” and “no”; and the names of a predefined number of commonly called coworkers, friends, or family members. Once training is complete, the user can initiate calls in the recognition phase by speaking the trained keywords. For example, if the name “John” were one of the trained names, the user could initiate a call to John by saying the phrase “Call John.” The VR system would recognize the words “Call” and “John,” and would dial the number that the user had previously entered as John's telephone number.
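By way of illustration only, the following minimal sketch models the training and recognition phases described above, with each utterance reduced to a single fixed-length feature vector and matched by Euclidean distance. Practical speaker-dependent recognizers compare whole feature-vector sequences (for example with dynamic time warping or hidden Markov models); every name, size, and threshold below is an assumption rather than part of the described device.

/* Minimal illustrative sketch of the training and recognition phases of a
 * speaker-dependent VR device.  Each utterance is reduced to one fixed-length
 * feature vector and matched by Euclidean distance; names, sizes, and the
 * rejection threshold are assumptions for illustration only. */
#include <stdio.h>
#include <string.h>
#include <math.h>

#define FEATURE_DIM   10        /* e.g., LPC-derived coefficients per utterance */
#define MAX_TEMPLATES 20        /* keywords plus stored names */

typedef struct {
    char  word[16];             /* vocabulary entry, e.g., "call" or "John" */
    float feature[FEATURE_DIM]; /* template learned in the training phase */
} Template;

static Template vocabulary[MAX_TEMPLATES];
static int      num_templates = 0;

/* Training phase: store the features of one spoken vocabulary word. */
static void train_word(const char *word, const float *feature)
{
    if (num_templates >= MAX_TEMPLATES)
        return;
    Template *t = &vocabulary[num_templates++];
    strncpy(t->word, word, sizeof t->word - 1);
    t->word[sizeof t->word - 1] = '\0';
    memcpy(t->feature, feature, sizeof t->feature);
}

/* Recognition phase: return the vocabulary word whose template lies closest
 * to the incoming features, or NULL if no template is close enough. */
static const char *recognize(const float *feature, float reject_threshold)
{
    const char *best = NULL;
    float best_dist = reject_threshold;

    for (int i = 0; i < num_templates; i++) {
        float dist = 0.0f;
        for (int d = 0; d < FEATURE_DIM; d++) {
            float diff = feature[d] - vocabulary[i].feature[d];
            dist += diff * diff;
        }
        dist = sqrtf(dist);
        if (dist < best_dist) {
            best_dist = dist;
            best = vocabulary[i].word;
        }
    }
    return best;
}

int main(void)
{
    float call_feat[FEATURE_DIM] = { 0.9f, 0.1f };  /* toy features, rest zero */
    float john_feat[FEATURE_DIM] = { 0.1f, 0.8f };

    train_word("call", call_feat);                  /* training phase */
    train_word("John", john_feat);

    float heard[FEATURE_DIM] = { 0.85f, 0.15f };    /* recognition phase */
    const char *w = recognize(heard, 1.0f);
    printf("recognized: %s\n", w ? w : "(rejected)");
    return 0;
}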
Conventional VR devices typically use a digital signal processor (DSP) or a microprocessor to analyze the incoming speech samples, extract the relevant parameters, decode the parameters, and compare the decoded parameters with a stored set of words, or VR templates, which constitutes the vocabulary of the VR device. The vocabulary is stored in a nonvolatile memory such as, e.g., a flash memory. In a conventional VR system with both a DSP and a microprocessor, such as, e.g., a digital cellular telephone, the nonvolatile memory is generally accessible by the microprocessor, but not by the DSP. In such a system, if the VR is performed entirely in the microprocessor, the microprocessor usually lacks the computational power to deliver recognition results with reasonable latency. On the other hand, if the VR is performed entirely in the DSP, the microprocessor must read the flash memory and pass the contents to the DSP, because the DSP's relatively small on-chip memory cannot hold the large VR templates. This is a lengthy process because the typically low bandwidth of the interface between the DSP and the microprocessor limits the amount of data that can be transferred between the two devices in a given amount of time. Thus, there is a need for a VR device that efficiently combines the computational power of a DSP with the memory capacity of a microprocessor.
SUMMARY OF THE INVENTION
The present invention is directed to a VR device that efficiently combines the computational power of a DSP with the memory capacity of a microprocessor. Accordingly, in one aspect of the invention, a distributed voice recognition system advantageously includes a digital signal processor configured to receive digitized speech samples and extract therefrom a plurality of parameters; a storage medium containing a plurality of speech templates; and a processor coupled to the storage medium and to the digital signal processor, the processor being configured to receive the plurality of parameters from the digital signal processor and compare the plurality of parameters with the plurality of speech templates.
In another aspect of the invention, a method of distributing voice recognition processing advantageously includes the steps of extracting, in a digital signal processor, a plurality of parameters from a plurality of digitized speech samples; providing the plurality of parameters to a microprocessor; and comparing, in the microprocessor, the plurality of parameters with a plurality of speech templates.
In another aspect of the invention, a distributed voice recognition system advantageously includes means for extracting a plurality of parameters from a plurality of digitized speech samples; means for permanently storing a plurality of speech templates; and means for receiving the plurality of parameters from the means for extracting and comparing the plurality of parameters with the plurality of speech templates.
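The following minimal sketch, again with illustrative names and sizes rather than the patented implementation, shows the division of labor summarized above: a DSP-side routine reduces a frame of digitized speech samples to a compact parameter vector, only that vector crosses the DSP-microprocessor interface, and a microprocessor-side routine compares it with speech templates held in the storage that only the microprocessor can reach. The matching step reuses the simple nearest-template comparison of the earlier sketch, and the interface is modeled here as a plain struct passed between two functions.

/* Minimal illustrative sketch of the distributed split: the DSP side extracts
 * a small parameter vector, only that vector crosses the interface, and the
 * microprocessor side compares it with templates in its own storage.
 * Sizes, names, and template values are assumptions for illustration only. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define FRAME_LEN 160           /* 20 ms of speech at 8 kHz */
#define PARAM_DIM 4             /* toy parameters; real front ends send tens */
#define NUM_WORDS 3

/* Compact message crossing the DSP <-> microprocessor interface.  At
 * PARAM_DIM floats it is far smaller than the FRAME_LEN raw samples, which
 * is why feature extraction stays on the DSP side of the low-bandwidth link. */
typedef struct {
    float params[PARAM_DIM];
} ParamMsg;

/* ---- DSP side ------------------------------------------------------- */
/* Extract a toy parameter vector (energy, zero crossings, first two
 * autocorrelation lags) from one frame of digitized speech samples. */
static ParamMsg dsp_extract_params(const int16_t *samples)
{
    ParamMsg m = { { 0.0f } };
    float energy = 0.0f, zc = 0.0f, r1 = 0.0f, r2 = 0.0f;

    for (int n = 0; n < FRAME_LEN; n++) {
        float x = samples[n] / 32768.0f;
        energy += x * x;
        if (n > 0 && (int)samples[n] * samples[n - 1] < 0)
            zc += 1.0f;
        if (n >= 1) r1 += x * (samples[n - 1] / 32768.0f);
        if (n >= 2) r2 += x * (samples[n - 2] / 32768.0f);
    }
    m.params[0] = energy; m.params[1] = zc;
    m.params[2] = r1;     m.params[3] = r2;
    return m;
}

/* ---- Microprocessor side: owns the template storage ----------------- */
static const char *word_names[NUM_WORDS] = { "call", "send", "cancel" };
static const float templates[NUM_WORDS][PARAM_DIM] = {  /* e.g., read from flash */
    { 2.0f, 30.0f, 1.5f, 1.0f },
    { 1.0f, 60.0f, 0.5f, 0.2f },
    { 3.0f, 20.0f, 2.5f, 2.0f },
};

static const char *host_compare_params(const ParamMsg *m)
{
    int best = 0;
    float best_dist = INFINITY;

    for (int w = 0; w < NUM_WORDS; w++) {
        float dist = 0.0f;
        for (int d = 0; d < PARAM_DIM; d++) {
            float diff = m->params[d] - templates[w][d];
            dist += diff * diff;
        }
        if (dist < best_dist) { best_dist = dist; best = w; }
    }
    return word_names[best];
}

int main(void)
{
    int16_t frame[FRAME_LEN];
    for (int n = 0; n < FRAME_LEN; n++)              /* toy input frame */
        frame[n] = (int16_t)(8000.0 * sin(0.2 * (double)n));

    ParamMsg msg = dsp_extract_params(frame);                /* on the DSP */
    printf("best match: %s\n", host_compare_params(&msg));   /* on the host */
    return 0;
}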