Apparatus and program for separating a desired sound from a...

Data processing: speech signal processing – linguistics – language – Audio signal bandwidth compression or expansion

Reexamination Certificate

Details

Subclass: C704S501000

Type: Reexamination Certificate

Status: active

Patent number: 07076433

ABSTRACT:
A sound separation apparatus for separating a target signal from a mixed input signal, wherein the mixed input signal includes the target signal and one or more sound signals emitted from different sound sources. The sound separation apparatus comprises a frequency analyzer for performing a frequency analysis on the mixed input signal and calculating a spectrum and frequency component candidate points at each time point. The apparatus further comprises feature extraction means for extracting feature parameters that are estimated to correspond to the target signal, comprising a local layer for analyzing local feature parameters using the spectrum and the frequency component candidate points and one or more global layers for analyzing global feature parameters using the feature parameters extracted by the local layer. The apparatus further comprises a signal regenerator for regenerating a waveform of the target signal using the feature parameters extracted by the feature extraction means.

Because local and global feature parameters are processed together in the feature extraction means, the separation accuracy of the target signal is improved without depending on the accuracy of extracting individual feature parameters from the input signal. Feature parameters to be extracted include the frequencies and amplitudes of the frequency component candidate points and their variation rates, harmonic structure, pitch consistency, intonation, onset/offset information and/or sound source direction. The number of layers provided in the feature extraction means may be changed according to the types of feature parameters to be extracted.
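
The abstract describes a pipeline of a frequency analyzer, a feature extraction stage with local and global layers, and a signal regenerator. The Python sketch below shows how such a pipeline could be wired together under stated assumptions; it is not the patented implementation. The function names, the spectral-peak rule for candidate points, the fixed pitch f0 used for the harmonic-structure check, and the cosine resynthesis are all illustrative choices, not details taken from the patent.

```python
# Hypothetical sketch of the pipeline outlined in the abstract; not the patented
# implementation. Names, peak picking, the fixed pitch f0 and the cosine
# resynthesis are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def frequency_analyzer(mixed, fs, nperseg=1024):
    """Frequency analysis: spectrum plus per-frame frequency component
    candidate points (here taken as simple spectral-magnitude peaks)."""
    f, t, Z = stft(mixed, fs=fs, nperseg=nperseg)
    candidates = []
    for frame in np.abs(Z).T:
        peaks = np.where((frame[1:-1] > frame[:-2]) & (frame[1:-1] > frame[2:]))[0] + 1
        candidates.append(f[peaks])
    return f, t, Z, candidates

def local_layer(f, Z, candidates):
    """Local layer: local feature parameters (frequency, amplitude) for each
    candidate point in each frame."""
    feats = []
    for i, cand in enumerate(candidates):
        idx = np.searchsorted(f, cand)
        feats.append(list(zip(cand, np.abs(Z[idx, i]))))
    return feats

def global_layer(local_feats, f0=120.0, tol=0.05):
    """Global layer: keep candidates consistent with a harmonic structure on an
    assumed pitch f0 (a crude stand-in for pitch-consistency analysis)."""
    selected = []
    for frame in local_feats:
        selected.append([(fr, a) for fr, a in frame
                         if fr > 0 and abs(fr / f0 - round(fr / f0)) < tol])
    return selected

def signal_regenerator(selected, t, fs, length):
    """Signal regenerator: rebuild a waveform from the selected feature
    parameters by frame-wise sinusoidal synthesis."""
    out = np.zeros(length)
    n_hop = int((t[1] - t[0]) * fs) if len(t) > 1 else length
    for i, frame in enumerate(selected):
        start = i * n_hop
        seg = np.arange(max(0, min(n_hop, length - start))) / fs
        for fr, a in frame:
            out[start:start + len(seg)] += a * np.cos(2 * np.pi * fr * seg)
    return out

if __name__ == "__main__":
    # Toy mixture: a 120 Hz harmonic target plus a 300 Hz interfering tone.
    fs, n = 16000, 16000
    tt = np.arange(n) / fs
    mixed = (np.cos(2 * np.pi * 120 * tt) + 0.5 * np.cos(2 * np.pi * 240 * tt)
             + 0.7 * np.cos(2 * np.pi * 300 * tt))
    f, t, Z, cand = frequency_analyzer(mixed, fs)
    target = signal_regenerator(global_layer(local_layer(f, Z, cand)), t, fs, n)
    print(target.shape)
```

In this sketch the global layer simply discards candidate points that do not fall near a multiple of the assumed pitch, so the 300 Hz interferer is dropped while the 120 Hz and 240 Hz components of the target survive; the patent's feature extraction means covers a broader set of global cues (pitch consistency, intonation, onset/offset, source direction) than this single harmonic test.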

REFERENCES:
patent: 4885790 (1989-12-01), McAulay et al.
patent: 05-036081 (1993-02-01), None
patent: 07-167271 (1995-07-01), None
patent: 08-068560 (1996-03-01), None
Klapuri et al., "Robust Multipitch Estimation for the Analysis and Manipulation of Polyphonic Musical Signals," Proc. of COST-G6 Conference on Digital Audio Effects, Dec. 7-9, 2000.
Tolonen, "Methods for Separation of Harmonic Sound Sources using Sinusoidal Modeling," in AES 106th Convention, Munich, Germany, May 1999.
Tuomas Virtanen et al., "Separation of Harmonic Sound Sources Using Sinusoidal Modeling," Proceedings 2000 IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 2, Jun. 2000, pp. 765-768.
Tomohiro Nakatani et al., "Harmonic Sound Stream Segregation Using Localization and its Application to Speech Stream Segregation," Speech Communication, Elsevier Science B.V., vol. 27, no. 3-4, Apr. 1999, pp. 209-222.
Motosugu Abe et al., "Auditory Scene Analysis Based on Time-Frequency Integration of Shared FM and AM," Acoustics, Speech and Signal Processing, Proc. 1998 IEEE International Conference, Seattle, Washington, May 1998, pp. 2421-2424.
Robert J. McAulay, "Speech Analysis/Synthesis Based on a Sinusoidal Representation," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-34, no. 4, Aug. 1986, pp. 744-754.
