System and method for lossy compression of voice recognition...

Data processing: speech signal processing – linguistics – language – Speech signal processing – Recognition

Reexamination Certificate


Details

Patent number: 06681207
Status: active
U.S. classifications: C704S241000, C704S246000

ABSTRACT:

BACKGROUND
1. Field
The present invention pertains generally to the field of communications and more specifically to a system and method for improving storage of templates in a voice recognition system.
2. Background
Voice recognition (VR) represents one of the most important techniques to endow a machine with simulated intelligence to recognize user-voiced commands and to facilitate human interface with the machine. VR also represents a key technique for human speech understanding. Systems that employ techniques to recover a linguistic message from an acoustic speech signal are called voice recognizers. The term “voice recognizer” is used herein to mean generally any spoken-user-interface-enabled device.
The use of VR (also commonly referred to as speech recognition) is becoming increasingly important for safety reasons. For example, VR may be used to replace the manual task of pushing buttons on a wireless telephone keypad. This is especially important when a user is initiating a telephone call while driving a car. When using a phone without VR, the driver must remove one hand from the steering wheel and look at the phone keypad while pushing the buttons to dial the call. These acts increase the likelihood of a car accident. A speech-enabled phone (i.e., a phone designed for speech recognition) would allow the driver to place telephone calls while continuously watching the road. In addition, a hands-free car-kit system would permit the driver to maintain both hands on the steering wheel during call initiation.
Speech recognition devices are classified as either speaker-dependent (SD) or speaker-independent (SI) devices. Speaker-dependent devices, which are more common, are trained to recognize commands from particular users. In contrast, speaker-independent devices are capable of accepting voice commands from any user. To increase the performance of a given VR system, whether speaker-dependent or speaker-independent, training is required to equip the system with valid parameters. In other words, the system needs to learn before it can function optimally.
An exemplary vocabulary for a hands-free car kit might include the digits on the keypad; the keywords “call,” “send,” “dial,” “cancel,” “clear,” “add,” “delete,” “history,” “program,” “yes,” and “no”; and the names of a predefined number of commonly called coworkers, friends, or family members. Once training is complete, the user can initiate calls by speaking the trained keywords, which the VR device recognizes by comparing the spoken utterances with the previously trained utterances (stored as templates) and taking the best match. For example, if the name “John” were one of the trained names, the user could initiate a call to John by saying the phrase “Call John.” The VR system would recognize the words “Call” and “John,” and would dial the number that the user had previously entered as John's telephone number. Garbage templates are used to represent all words not in the vocabulary.
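By way of illustration, the following is a minimal sketch of this best-match template comparison, assuming feature sequences stored as NumPy arrays of cepstral frames, a dynamic-time-warping distance, and a single garbage score standing in for the garbage templates; the function names and data layout are assumptions of this sketch, not the patent's implementation.

    import numpy as np

    def dtw_distance(a, b):
        """Dynamic-time-warping distance between two feature sequences,
        each an array of shape (frames, coefficients)."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame distance
                cost[i, j] = d + min(cost[i - 1, j],      # skip a frame of a
                                     cost[i, j - 1],      # skip a frame of b
                                     cost[i - 1, j - 1])  # align the frames
        return cost[n, m] / (n + m)  # length-normalized path cost

    def recognize(utterance, templates, garbage_score):
        """Return the label of the best-matching stored template, or None
        (rejection) if even the best match scores worse than the garbage
        score standing in for the out-of-vocabulary templates."""
        best_label, best_score = None, np.inf
        for label, template in templates.items():
            score = dtw_distance(utterance, template)
            if score < best_score:
                best_label, best_score = label, score
        return best_label if best_score < garbage_score else None

In a deployed system the comparison would typically run against trained HMM or DTW models rather than raw feature sequences, but the best-match and reject logic is the same.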
Combining multiple VR engines provides enhanced accuracy and uses a greater amount of information in the input speech signal. A system and method for combining VR engines is described in U.S. patent application Ser. No. 09/618,177 (hereinafter '177 application) entitled “COMBINED ENGINE SYSTEM AND METHOD FOR VOICE RECOGNITION”, filed Jul. 18, 2000, and U.S. patent application Ser. No. 09/657,760 (hereinafter '760 application) entitled “SYSTEM AND METHOD FOR AUTOMATIC VOICE RECOGNITION USING MAPPING,” filed Sep. 8, 2000, which are assigned to the assignee of the present invention and fully incorporated herein by reference.
Although a VR system that combines VR engines is more accurate than a VR system that uses a single VR engine, each VR engine of the combined VR system may still produce errors in a noisy environment. An input speech signal may not be recognized because of background noise. Background noise may result in no match between the input speech signal and any template in the VR system's vocabulary, or it may cause a mismatch between the input speech signal and a template in the vocabulary. When there is no match, the input speech signal is rejected. A mismatch results when the VR system chooses a template that does not correspond to the input speech signal. The mismatch condition is also known as substitution because an incorrect template is substituted for the correct one.
An embodiment that improves VR accuracy in the presence of background noise is therefore desired. An example of background noise that can cause a rejection or a mismatch arises when a cell phone is used for voice dialing while driving and the input speech signal received at the microphone is corrupted by additive road noise. The additive road noise may degrade voice recognition accuracy and cause a rejection or a mismatch.
Another example of noise that can cause a rejection or a mismatch arises when the speech signal received at a microphone placed on the visor or on a headset is subjected to convolutional distortion. Noise caused by convolutional distortion is known as convolutional noise and results in frequency mismatch. Convolutional distortion depends on many factors, such as the distance between the mouth and the microphone, the frequency response of the microphone, the acoustic properties of the interior of the automobile, and so on. Such conditions may degrade voice recognition accuracy.
Traditionally, VR systems have included a RASTA filter to filter out convolutional noise; however, the RASTA filter does not filter out background noise. Such a filter is described in U.S. Pat. No. 5,450,522. Thus, there is a need for a technique that filters both convolutional noise and background noise, which would improve the accuracy of a VR system.
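As a rough illustration of RASTA-style filtering (a sketch only, not the filter of U.S. Pat. No. 5,450,522 or this patent's own processing), the band-pass filter below is applied to the time trajectory of each log filter-bank band, attenuating the slowly varying components introduced by convolutional distortion; the coefficients are the commonly cited RASTA values and should be treated as assumptions.

    import numpy as np
    from scipy.signal import lfilter

    def rasta_filter(log_spectrum):
        """Band-pass filter each log-spectral trajectory over time.

        log_spectrum: array of shape (frames, bands) of log filter-bank
        energies.  Very slow variations (channel/convolutional effects)
        and very fast fluctuations are attenuated, keeping the modulation
        rates typical of speech."""
        b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])  # assumed RASTA numerator
        a = np.array([1.0, -0.98])                       # assumed RASTA pole
        return lfilter(b, a, log_spectrum, axis=0)       # filter along time

Because the filtering acts in the log-spectral domain, a convolutional channel appears as a near-constant additive offset and is largely removed; additive background noise is not, which is the gap noted above.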
In a VR system, whether speaker-dependent or speaker-independent, the number of templates that can be stored is limited by the size of the memory, and this limit in turn restricts the robustness of the system. A system and method that increases the number of templates that can be stored in the memory of such VR systems is therefore desired.
SUMMARY
The described embodiments are directed to a system and method for improving storage of templates in a voice recognition system. In one aspect, a system and method for voice recognition includes recording a plurality of utterances, extracting features of the plurality of utterances to generate extracted features of the plurality of utterances, creating a plurality of VR models from the extracted features of the plurality of utterances, and lossy-compressing the plurality of VR models to generate a plurality of lossy-compressed VR models. In one aspect, A-law compression and expansion are used. In another aspect, mu-law compression and expansion are used. In one aspect, the VR models are Hidden Markov Models (HMM). In another aspect, the VR models are Dynamic Time Warping (DTW) models.
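For concreteness, a minimal sketch of the mu-law compression and expansion path follows; the normalization to [-1, 1], the 8-bit quantization step, and the function names are assumptions of this sketch rather than details from the patent, and an A-law variant would simply substitute the A-law formulas.

    import numpy as np

    MU = 255.0  # standard mu-law constant

    def mu_law_compress(x):
        """Mu-law compress values in [-1, 1]."""
        return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

    def mu_law_expand(y):
        """Inverse of mu_law_compress."""
        return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

    def compress_model(params):
        """Lossy-compress model parameters (e.g., HMM means or DTW template
        frames): scale to [-1, 1], mu-law compress, quantize to 8 bits."""
        scale = float(np.max(np.abs(params))) or 1.0
        codes = np.round((mu_law_compress(params / scale) + 1.0) * 127.5)
        return codes.astype(np.uint8), scale

    def expand_model(codes, scale):
        """Recover approximate model parameters from the 8-bit codes."""
        compressed = codes.astype(np.float64) / 127.5 - 1.0
        return mu_law_expand(compressed) * scale

Storing 8-bit codes instead of, say, 32-bit floating-point values cuts per-model storage by a factor of four, which is how companding allows more models to fit in the same memory.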
In one aspect, a voice recognition (VR) system comprises a training module configured to extract features of a plurality of utterances to generate extracted features of the utterances, create a plurality of VR models from the extracted features of the utterances, and lossy-compress the plurality of VR models to generate a plurality of lossy-compressed VR models. In one aspect, the VR system further comprises a feature extraction module configured to extract features of a test utterance to generate extracted features of the test utterance, an expansion module configured to expand a lossy-compressed VR model from the plurality of lossy-compressed VR models to generate an expanded VR model, and a pattern-matching module configured to match the extracted features of the test utterance to the expanded VR model to generate a recognition hypothesis.
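Tying the pieces together, the hypothetical glue code below shows how such modules could fit: a training module that extracts features, builds a simple model per utterance, and lossy-compresses it; and a recognizer that expands a stored model and pattern-matches the test utterance against it. It reuses compress_model, expand_model, and dtw_distance from the sketches above, uses a raw feature sequence as a stand-in for an HMM or DTW model, and all class and parameter names are assumptions.

    import numpy as np

    class TrainingModule:
        """Extracts features from training utterances, builds a model
        (here the raw feature sequence stands in for an HMM/DTW model),
        and lossy-compresses it for storage."""
        def __init__(self, extract_features):
            self.extract_features = extract_features

        def train(self, utterances):
            """utterances: dict mapping label -> raw audio samples."""
            models = {}
            for label, audio in utterances.items():
                features = self.extract_features(audio)
                models[label] = compress_model(features)  # (codes, scale)
            return models

    class Recognizer:
        """Expands stored lossy-compressed models on demand and
        pattern-matches the test utterance against each one."""
        def __init__(self, extract_features, models, reject_threshold):
            self.extract_features = extract_features
            self.models = models
            self.reject_threshold = reject_threshold

        def recognize(self, audio):
            test = self.extract_features(audio)
            best_label, best_score = None, np.inf
            for label, (codes, scale) in self.models.items():
                template = expand_model(codes, scale)  # expansion module
                score = dtw_distance(test, template)   # pattern matching
                if score < best_score:
                    best_label, best_score = label, score
            # No hypothesis (rejection) if nothing matched well enough.
            return best_label if best_score < self.reject_threshold else None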


REFERENCES:
patent: 5404422 (1995-04-01), Sakamoto et al.
patent: 5475792 (1995-12-01), Stanford et al.
patent: 5734793 (1998-03-01), Wang
patent: 6009387 (1999-12-01), Ramaswa
