Method and system for speech reconstruction from speech...

Classification: Data processing – speech signal processing, linguistics, language – speech signal processing for storage or transmission

Reexamination Certificate


C704S203000


active

06725190

ABSTRACT:

FIELD OF THE INVENTION
This invention relates generally to speech recognition for the purpose of speech to text conversion and, in particular, to speech reconstruction from speech recognition features.
REFERENCES
In the following description reference is made to the following publications:
[1] Kazuhito Koishida, Keiichi Tokuda, Takao Kobayashi, Satoshi Imai, “CELP Coding Based on Mel-Cepstral Analysis”, Proc. ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Vol. 1, 1995, IEEE, Piscataway, N.J. [See definition of Mel Cepstrum on page 33].
[2] Yannis Stylianou, Olivier Cappé, Eric Moulines, “Continuous Probabilistic Transform for Voice Conversion”, IEEE Transactions on Speech and Audio Processing, Vol. 6, No. 2, March 1998, pp. 131-142 [See page 137 defining the cepstral parameters c(i)].
[3] R. J. McAulay, T. F. Quatieri, “Speech Analysis/Synthesis Based on a Sinusoidal Representation”, IEEE Trans. Acoust., Speech, Signal Processing, Vol. ASSP-34, No. 4, August 1986.
[4] L. B. Almeida, F. M. Silva, “Variable-Frequency Synthesis: An Improved Harmonic Coding Scheme”, Proc. ICASSP, pp. 237-244, 1984.
[5] R. J. McAulay, T. F. Quatieri, “Sinusoidal Coding”, in Speech Coding and Synthesis, W. Kleijn and K. Paliwal, Eds., Elsevier, 1995, ch. 4.
[6] S. Davis and P. Mermelstein, “Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences”, IEEE Trans. Acoust., Speech, Signal Processing, Vol. 28, No. 4, pp. 357-366, 1980.
BACKGROUND OF THE INVENTION
All speech recognition schemes for the purpose of speech to text conversion start by converting the digitized speech to a set of features that are then used in all subsequent stages of the recognition process. These features, usually sampled at regular intervals, capture in some sense the speech content of the spectrum of the speech signal. In many systems, the features are obtained by the following three-step procedure:
(a) deriving at successive instances of time an estimate of the spectral envelope of the digitized speech signal,
(b) multiplying each estimate of the spectral envelope by a predetermined set of frequency domain window functions, wherein each window is non-zero over a narrow range of frequencies, and computing the integrals thereof, and
(c) assigning the computed integrals or a set of pre-determined functions thereof to respective components of a corresponding feature vector in a series of feature vectors.
The centers of mass of successive window functions are monotonically increasing. A typical example is the Mel Cepstrum, which is obtained with a specific set of weight functions used at step (b) to form the integrals of the products of the spectrum and the weight functions. These integrals are called ‘bin’ values and form a binned spectrum. The truncated logarithm of the binned spectrum is then computed, and the resulting vector is cosine transformed to obtain the Mel Cepstral values.
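The three-step feature procedure and the Mel Cepstrum variant described above can be sketched in numpy. This is a minimal illustration, not the patent's implementation: the window matrix, the log floor (standing in for the ‘truncated logarithm’), and the explicit type-II DCT are all illustrative assumptions.

```python
import numpy as np

def binned_spectrum(envelope, windows):
    """Step (b): integrate the spectral envelope against each window.

    envelope: spectral envelope sampled on a uniform frequency grid
    windows:  matrix of frequency-domain window functions, one per row,
              each non-zero only over a narrow band of frequencies
    """
    return windows @ envelope  # one 'bin' value per window

def mel_cepstrum(envelope, windows, n_ceps=13, floor=1e-10):
    """Truncated log of the binned spectrum, then a cosine transform (step (c))."""
    bins = binned_spectrum(envelope, windows)
    log_bins = np.log(np.maximum(bins, floor))  # 'truncated logarithm'
    n = len(log_bins)
    k = np.arange(n)
    # type-II DCT written out explicitly, keeping the first n_ceps coefficients
    return np.array([np.sum(log_bins * np.cos(np.pi * q * (k + 0.5) / n))
                     for q in range(n_ceps)])
```

In practice the window matrix would hold triangular mel-spaced filters; any set of narrow-band weight functions with monotonically increasing centers of mass fits the scheme above.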
There are a number of applications that require the ability to reproduce the speech from these features. For example, the speech recognition may be carried out on a remote server, and at some other station connected to that server it is desired to listen to the original speech. Because of channel bandwidth limitations, it is not possible to send the original speech signal from the client device used for input to the server, and from that server to another remote client device. Therefore, the speech signal must be compressed. On the other hand, it is imperative that the compression scheme used to compress the speech does not affect the recognition rate.
An effective way to do that is to simply send a compressed version of the recognition features themselves, as it may be expected that all redundant information has been already removed in generating these features. This means that an optimal compression rate can be attained. Because the transformation from speech signal to features is a many-to-one transformation, i.e. it is not invertible, it is not evident how the reproduction of speech from features can be carried out, if at all.
To a first approximation, the speech signal at any time can be assumed to be voiced, unvoiced or silent. The voiced segments represent instances where the speech signal is nearly periodic. For speech signals, this period is called the pitch. To measure the degree to which the signal can be approximated by a periodic signal, ‘windows’ are defined. These are smooth functions, e.g. Hamming functions, whose width is chosen to be short enough so that inside each window the signal may be approximated by a periodic function. The purpose of the window function is to discount the effects of the drift away from periodicity at the edges of the analysis interval. The window centers are placed at regular intervals on the time axis. The analysis units are then defined to be the product of the signal and the window function, representing frames of the signal. On each frame, the windowed square distance between the true spectrum and its periodic approximation may serve as a measure of periodicity.

It is well known that any periodic signal can be represented as a sum of sine waves that are periodic with the period of the signal. Each sine wave is characterized by its amplitude and phase. For any given fundamental frequency (pitch) of the speech signal, the sequence of complex numbers representing the amplitudes and phases of the coefficients of the sine waves will be referred to as the “line spectrum”. It turns out that it is possible to compute a line spectrum for speech that contains enough information to reproduce the speech signal so that the human ear will judge it almost indistinguishable from the original signal (Almeida [4], McAulay et al. [5]). A particularly simple way to reproduce the signal from the sequence of line spectra corresponding to a sequence of frames is to sum the sine waves for each frame, multiply each sum by its window, and add these signal segments over all frames to obtain reconstructed speech of arbitrary length.
This procedure will be effective if the windows sum up to a roughly constant time function.
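The reconstruction just described — sum the sine waves of each frame, multiply by the window, and add the windowed segments across frames — can be sketched as follows. The Hanning window and the (frequency, amplitude, phase) triple format are illustrative assumptions; with a Hanning window and a hop of half the window length, the shifted windows sum to a roughly constant time function, as the text requires.

```python
import numpy as np

def synthesize(frames, frame_len, hop, fs):
    """Overlap-add sinusoidal synthesis from a sequence of line spectra.

    frames:    list of line spectra; each is a list of (freq_hz, amplitude,
               phase) triples for one analysis frame
    frame_len: window length in samples
    hop:       spacing between window centers in samples
    fs:        sampling rate in Hz
    """
    window = np.hanning(frame_len)
    out = np.zeros(hop * (len(frames) - 1) + frame_len)
    t = np.arange(frame_len) / fs
    for i, lines in enumerate(frames):
        seg = np.zeros(frame_len)
        for f, a, ph in lines:
            seg += a * np.cos(2 * np.pi * f * t + ph)  # sum of sine waves
        start = i * hop
        out[start:start + frame_len] += window * seg   # multiply by window, add
    return out
```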
The line spectrum can be viewed as a sequence of samples, at multiples of the pitch frequency, of a spectral envelope representing the utterance at the given instant. The spectral envelope represents the Fourier transform of the infinite impulse response of the mouth while pronouncing that utterance. The essential fact about a line spectrum is that if it represents a perfectly periodic signal whose period is the pitch, the individual sine waves corresponding to particular frequency components over successive frames are aligned, i.e. they have precisely the same value at every given point in time, independent of the source frame. For a real speech signal, the pitch varies from one frame to another. For this reason, the sine waves resulting from the same frequency component for successive frames are only approximately aligned. This is in contrast to the sine waves corresponding to components of the discrete Fourier transform, which are not necessarily aligned individually from one frame to the next. For unvoiced intervals, a pitch equal to the Fourier analysis interval is arbitrarily assumed. It is also known that, given only the set of absolute values of the line spectral coefficients, there are a number of ways to generate phases (McAulay [3], [5]), so that the signal reproduced from the line spectrum having the given amplitudes and the computed phases will produce speech of very acceptable resemblance to the original signal.
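Viewing the line spectrum as samples of the spectral envelope at multiples of the pitch, the harmonic amplitudes can be read off by interpolating the envelope at the harmonic frequencies. A hypothetical sketch (the linear interpolation and the frequency-grid representation are assumptions, not the patent's method):

```python
import numpy as np

def line_spectrum_from_envelope(envelope, grid_hz, f0, fs):
    """Sample a spectral envelope at multiples of the pitch frequency.

    envelope: magnitude of the spectral envelope on the frequency grid grid_hz
    f0:       pitch (fundamental frequency) in Hz
    fs:       sampling rate; harmonics are taken up to the Nyquist rate fs / 2
    """
    harmonics = np.arange(1, int((fs / 2) // f0) + 1) * f0
    amplitudes = np.interp(harmonics, grid_hz, envelope)
    return harmonics, amplitudes
```

Combined with one of the phase-generation schemes of McAulay [3], [5], such amplitude samples suffice to drive a sinusoidal reconstruction.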
Given any approximation of the spectral envelope, a common way to compute features is the so-called Mel Cepstrum. The Mel Cepstrum is defined through a discrete cosine transform (DCT) on the log Mel Spectrum. The Mel Spectrum is defined by a collection of windows, where the i-th window (i = 0, 1, 2, . . . ) is centered at frequency f(i), where f(i) = MEL(a·i) and f(i+1) > f(i). The f
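Assuming MEL denotes the inverse of the commonly used mel-scale formula mel(f) = 2595·log10(1 + f/700) — an assumption, since the text is cut off here — the window center frequencies f(i) = MEL(a·i) can be computed as:

```python
import numpy as np

def mel_to_hz(m):
    """Inverse of the common mel-scale formula mel(f) = 2595*log10(1 + f/700)."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def window_centers(num_windows, a):
    """Center frequencies f(i) = MEL(a*i): equally spaced on the mel scale."""
    return mel_to_hz(a * np.arange(num_windows))
```

Since mel_to_hz is strictly increasing, the condition f(i+1) > f(i) holds automatically for any a > 0.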
