Palette-based classifying and synthesizing of auditory...

Data processing: speech signal processing, linguistics, language – Speech signal processing – Recognition

Reexamination Certificate


Details

C704S245000, C704S239000, C704S250000, C704S258000


active

07634405

ABSTRACT:
The subject invention leverages spectral “palettes,” or compressed representations of an input sequence, to provide recognition and/or synthesis of a class of data. The class can include, but is not limited to, individual events, distributions of events, and/or environments relating to the input sequence. The representations are compressed versions of the data that require substantially fewer system resources to store and/or manipulate. Segments of the palettes are employed to facilitate reconstruction of an event occurring in the input sequence, providing an efficient means of recognizing events even when they occur in complex environments. The palettes themselves are constructed or “trained” utilizing any number of data compression techniques such as, for example, epitomes, vector quantization, and/or Huffman codes.
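As one illustration only (not the patent's actual implementation), a palette built by vector quantization can be sketched as follows: k-means clustering compresses a sequence of spectral frames into a small codebook of palette entries, and an event is reconstructed frame-by-frame from the nearest entries. All function names and parameters here are hypothetical.

```python
import numpy as np

def train_palette(frames, k=8, iters=20, seed=0):
    """Build a spectral 'palette' (codebook) from frames via k-means vector quantization."""
    rng = np.random.default_rng(seed)
    # initialize the palette with k randomly chosen frames
    codebook = frames[rng.choice(len(frames), size=k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest palette entry (squared Euclidean distance)
        dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        # move each palette entry to the mean of its assigned frames
        for j in range(k):
            members = frames[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

def reconstruct(frames, codebook):
    """Approximate each frame by its nearest palette entry."""
    dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return codebook[dists.argmin(axis=1)]

# demo: quantize a synthetic 'spectrogram' of 200 frames x 32 frequency bins
rng = np.random.default_rng(0)
spec = np.abs(rng.normal(size=(200, 32)))
palette = train_palette(spec, k=8)
approx = reconstruct(spec, palette)
print(palette.shape, approx.shape)  # (8, 32) (200, 32)
```

Storing the 8-entry palette plus one index per frame is far smaller than the original 200 frames, which mirrors the resource-saving claim of the abstract; a real system would likely operate on log-spectral or epitome features rather than raw magnitudes.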

REFERENCES:
patent: 6064958 (2000-05-01), Takahashi et al.
patent: 6535851 (2003-03-01), Fanty et al.
patent: 6718306 (2004-04-01), Satoh et al.
patent: 6990453 (2006-01-01), Wang et al.
patent: 7319964 (2008-01-01), Huang et al.
patent: 2003/0112265 (2003-06-01), Zhang
patent: 2004/0002931 (2004-01-01), Platt et al.
patent: 2004/0122672 (2004-06-01), Bonastre et al.
patent: 2004/0181408 (2004-09-01), Acero et al.
patent: 2005/0102135 (2005-05-01), Goronzy et al.
patent: 2005/0131688 (2005-06-01), Goronzy et al.
patent: 2005/0160449 (2005-07-01), Goronzy et al.
patent: 2006/0020958 (2006-01-01), Allamanche et al.
W. L. Perry and H. E. Stephanou, “Belief Function Divergence as a Classifier,” Proceedings of the 1991 IEEE International Symposium on Intelligent Control, 1991.
L. Lu, H. Zhang, and H. Jiang, “Content Analysis for Audio Classification and Segmentation,” IEEE Transactions on Speech and Audio Processing, vol. 10, no. 7, pp. 504-516, 2002.
B. J. Frey and N. Jojic, “Learning the ‘Epitome’ of an Image,” Technical Report PSI-2002-14, University of Toronto, 2002.
EIC Search Report (NPL).
A. Kapoor and S. Basu, “The Audio Epitome: A New Representation for Modeling and Classifying Auditory Phenomena,” Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2005.
M. A. Casey, “Reduced-Rank Spectra and Minimum-Entropy Priors as Consistent and Reliable Cues for Generalized Sound Recognition,” Workshop on Consistent and Reliable Acoustic Cues for Sound Analysis, Aalborg, Denmark, 2001.
G. Guo, et al., “Content-Based Audio Classification,” IEEE Transactions on Neural Networks, vol. 14, no. 1, Jan. 2003.
N. Jojic, et al., “Epitomic Analysis of Appearance and Shape,” Proceedings of the International Conference on Computer Vision (ICCV), Nice, France, 2003.
M. J. Reyes-Gomez, et al., “Selection, Parameter Estimation and Discriminative Training of Hidden Markov Models for General Audio Modeling,” Proceedings of the International Conference on Multimedia and Expo (ICME), Baltimore, USA, 2003.
T. Zhang, et al., “Heuristic Approach for Generic Audio Data Segmentation and Annotation,” Proceedings of the ACM International Conference on Multimedia, Orlando, USA, 1999.
