Feature vector-based apparatus and method for robust pattern...

Data processing: speech signal processing, linguistics, language – Speech signal processing – Recognition

Details

Type: Reexamination Certificate
Status: Active
Patent number: 07054810

ABSTRACT:
N sets of feature vectors are generated from a set of observation vectors indicative of a pattern to be recognized. At least one of the sets of feature vectors differs from at least one other set and is preselected so as to contain at least some complementary information with respect to that other set. The N sets of feature vectors are combined to obtain an optimized set of feature vectors that best represents the pattern. The combination is performed via either a weighted likelihood combination scheme or a rank-based state-selection scheme, preferably in accordance with an equation set forth herein. An apparatus suitable for performing the method is described, and implementation as a computer program product is also contemplated. The invention is applicable to any type of pattern recognition problem where robustness is important, such as recognition of speech, handwriting, or optical characters under challenging conditions.
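
The abstract refers to a combination equation set forth in the full specification, which is not reproduced on this page. As a rough illustration of the two combination strategies named above, the Python sketch below implements a log-linear weighting of per-stream state log-likelihoods and a simple average-rank criterion for state selection; the function names, the stream weights, and the top_k parameter are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def weighted_log_likelihood(stream_loglikes, weights):
    """Combine per-stream log-likelihoods for each model state.

    stream_loglikes: shape (N_streams, N_states), log p(x_n | state) computed
                     from each of the N feature-vector sets.
    weights:         shape (N_streams,), non-negative stream weights.

    Returns shape (N_states,): sum_n w_n * log p(x_n | state), i.e. a
    log-linear (product-of-experts style) combination of the streams.
    """
    stream_loglikes = np.asarray(stream_loglikes, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return weights @ stream_loglikes


def rank_based_state_selection(stream_loglikes, top_k=5):
    """Select states that rank well across all feature streams.

    Each stream ranks the states by its own likelihood (rank 0 = best);
    states are then scored by their average rank across streams, and the
    top_k states with the lowest average rank are returned.
    """
    stream_loglikes = np.asarray(stream_loglikes, dtype=float)
    # argsort of a descending argsort yields each state's rank within a stream.
    ranks = np.argsort(np.argsort(-stream_loglikes, axis=1), axis=1)
    avg_rank = ranks.mean(axis=0)
    return np.argsort(avg_rank)[:top_k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    loglikes = rng.normal(size=(2, 10))   # 2 feature streams, 10 states
    print(weighted_log_likelihood(loglikes, [0.6, 0.4]))
    print(rank_based_state_selection(loglikes, top_k=3))
```

Under this sketch, setting all stream weights equal reduces the weighted scheme to an ordinary sum of log-likelihoods, i.e. a plain product of the per-stream likelihoods.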

REFERENCES:
patent: 6535641 (2003-03-01), Baggenstoss
“Using Multiple Time Scales in a Multi-Stream Speech Recognition System,” S. Dupont et al., Eurospeech '97, Greece, Sep. 1997 (proceedings pp. 3-6).
“Data-derived Non-linear Mapping for Feature Extraction in HMM,” H. Hermansky et al., Proceedings of the Workshop on Automatic Speech Recognition and Understanding, Colorado, Dec. 1999.
“Heterogeneous Measurements and Multiple Classifiers for Speech Recognition,” A. Halberstadt et al., ICSLP '98, Sydney, Australia, 1998.
“Spectral Subband Centroid Features for Speech Recognition,” K. Paliwal, ICASSP '98, Seattle, Washington, May 1998 (proceedings pp. 617-620).
“Tandem Connectionist Feature Extraction for Conventional HMM Systems,” H. Hermansky et al., ICASSP 2000, Istanbul, Turkey, May 2000.
“Unified Decoding and Feature Representation for Improved Speech Recognition,” L. Jiang and X. Huang, Eurospeech '99, Budapest, 1999 (proceedings pp. 1331-1334).
“Non-stationary Multi-Channel (Multi Stream) Processing Towards Robust and Adaptive ASR,” H. Bourlard, Proceedings of the Workshop on Robust Methods for Speech Recognition in Adverse Conditions, Finland, 1999, pp. 1-10.
“Robust methods for using context-dependent features and models in a continuous speech recognizer,” Bahl et al., ICASSP 1994, vol. 1, pp. 533-536.
“Discriminative Model Combination,” P. Beyerlin, ICASSP '98, pp. 481-484, Seattle, May 1998.
“Enhanced Likelihood Computation Using Regression,” P. DeSouza et al., Eurospeech '99, pp. 1699-1702, Budapest, 1999.
“A Stochastic Segment Model for Phoneme-Based Continuous Speech Recognition,” Ostendorf and Roukos, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 12, Dec. 1989, pp. 1857-1869.
“Multi-stream Speech Recognition: Ready for Prime Time?,” A. Janin et al., Eurospeech '99, pp. 591-594, Budapest, 1999.

