Patent: 06330531 (Comb codebook structure)
Filed: 1998-09-18
Granted: 2001-12-11
Examiner: Korzuch, William (Department: 2741)
Classification: Data processing: speech signal processing, linguistics, language / Speech signal processing / For storage or transmission
Class: C704S221000
Type: Reexamination Certificate
Status: active
BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates generally to speech encoding and decoding in mobile cellular communication networks and, more particularly, it relates to various techniques used with code-excited linear prediction coding to obtain high quality speech reproduction through a limited bit rate communication channel.
2. Related Art
Signal modeling and parameter estimation play significant roles in data compression, decompression, and coding. To model basic speech sounds, speech signals must be sampled as a discrete waveform to be digitally processed. In one type of signal coding technique, called linear predictive coding (LPC), the signal value at any particular time index is modeled as a linear function of previous values. Each sample is thus linearly predictable from the samples that precede it. As a result, efficient signal representations can be determined by estimating and applying certain prediction parameters to represent the signal.
For linear predictive analysis, neighboring speech samples are highly correlated. Coding efficiency can be improved by using a short-term predictor to remove these redundancies and extract the formants of the signal. To compress speech data, it is desirable to extract only essential information and avoid transmitting redundancies. If desired, speech can be grouped into segments or short blocks, where various characteristics of the segments can be identified. “Good quality” speech may be characterized as speech that, when reproduced after having been encoded, is substantially perceptually indistinguishable from the original spoken speech. In order to generate good quality speech, a code excited linear predictive (CELP) speech coder must extract LPC parameters, pitch lag parameters (including lag and its associated coefficient), an optimal excitation (innovation) code-vector from a supplied codebook, and a corresponding gain parameter from the input speech. The encoder quantizes the LPC parameters by implementing appropriate coding schemes.
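For concreteness, here is a minimal sketch (not from the patent) of one standard way to extract the LPC parameters for a frame: the autocorrelation method solved with the Levinson-Durbin recursion. The function name, the Hamming window, and the default order of 10 are illustrative assumptions.

```python
import numpy as np

def lpc_coefficients(frame, order=10):
    """Estimate a_1..a_order via the autocorrelation method and the
    Levinson-Durbin recursion, so that y_hat(n) = sum_k a_k * y(n - k).
    Illustrative sketch only; real codecs add lag windows, bandwidth
    expansion, and conversion to a quantizable representation."""
    w = frame * np.hamming(len(frame))             # analysis window (assumed)
    # Autocorrelation for lags 0..order.
    r = np.array([np.dot(w[:len(w) - k], w[k:]) for k in range(order + 1)])

    a = np.zeros(order + 1)                        # a[1..order] hold the predictors
    err = r[0] + 1e-9                              # epsilon guards silent frames
    for i in range(1, order + 1):
        # Reflection coefficient from the order-(i-1) prediction error.
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err
        prev = a.copy()
        a[i] = k
        for j in range(1, i):                      # Levinson order-update step
            a[j] = prev[j] - k * prev[i - j]
        err *= (1.0 - k * k)                       # remaining error energy
    return a[1:]
```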
More particularly, the speech signal can be modeled as the output of a linear-prediction filter for the current speech coding segment, typically called a frame (with a typical duration of about 10-40 ms), where the filter is represented by the equation:

$$A(z) = 1 - a_1 z^{-1} - a_2 z^{-2} - \cdots - a_{np} z^{-np}$$
and the $n$th sample can be predicted by

$$\hat{y}(n) = \sum_{k=1}^{np} a_k \, y(n-k)$$
where “np” is the LPC prediction order (usually approximately 10), y(n) is sampled speech data, and “n” represents the time index.
The LPC equations above describe the estimation of the current sample as a linear combination of the past samples. The difference between the actual and the predicted sample is called the LPC residual:
$$r(n) = y(n) - \hat{y}(n) = y(n) - \sum_{k=1}^{np} a_k \, y(n-k)$$
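Computing the residual amounts to filtering the frame through A(z) itself. A minimal sketch, assuming NumPy/SciPy and coefficients a_1..a_np obtained as above:

```python
import numpy as np
from scipy.signal import lfilter

def lpc_residual(y, a):
    """r(n) = y(n) - sum_{k=1}^{np} a_k * y(n - k): the frame filtered
    through the analysis (whitening) filter A(z)."""
    A = np.concatenate(([1.0], -np.asarray(a, dtype=float)))
    return lfilter(A, [1.0], y)                    # numerator A(z), denominator 1
```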
A perceptual weighting filter W(z), based on the LPC filter and modeling the sensitivity of the human ear, is then defined by:
$$W(z) = \frac{A(z/\gamma_1)}{A(z/\gamma_2)}, \qquad 0 < \gamma_2 < \gamma_1 \le 1$$
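Since A(z/γ) is simply A(z) with each coefficient a_k scaled by γ^k, the weighting filter is straightforward to apply. A sketch under that observation; the γ values shown are common illustrative defaults, not values from the patent:

```python
import numpy as np
from scipy.signal import lfilter

def perceptual_weighting(x, a, gamma1=0.9, gamma2=0.6):
    """Apply W(z) = A(z/gamma1) / A(z/gamma2) with 0 < gamma2 < gamma1 <= 1.
    Scaling a_k by gamma**k widens the formant bandwidths, so coding error
    is shaped toward the formants, where the ear is least sensitive."""
    a = np.asarray(a, dtype=float)
    k = np.arange(1, len(a) + 1)
    num = np.concatenate(([1.0], -a * gamma1 ** k))    # A(z / gamma1)
    den = np.concatenate(([1.0], -a * gamma2 ** k))    # A(z / gamma2)
    return lfilter(num, den, x)
```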
The LPC prediction coefficients $a_1, a_2, \ldots, a_p$ are quantized and used to predict the signal, where “p” represents the LPC order.
After removing the correlation between adjacent samples, the resulting signal is further filtered through a long-term pitch predictor to extract the pitch information, and thus remove the correlation between adjacent pitch periods. The pitch data is quantized and used for predictive filtering of the speech signal. The information transmitted to the decoder includes the quantized filter parameters, gain terms, and the quantized LPC residual from the filters.
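As a concrete illustration of the long-term predictor, the hypothetical sketch below searches for the lag and gain that best predict the current subframe of the residual from one pitch period earlier; the lag range is a typical narrowband choice, not taken from the patent.

```python
import numpy as np

def find_pitch(res, sub_len, lag_min=20, lag_max=147):
    """Search the long-term (pitch) predictor r(n) ~ beta * r(n - Lag)
    over the last `sub_len` samples of `res`, which must contain at
    least lag_max samples of history before the current subframe."""
    cur = res[-sub_len:]
    best_lag, best_beta, best_score = lag_min, 0.0, -np.inf
    for lag in range(lag_min, lag_max + 1):
        past = res[-sub_len - lag:-lag]            # r(n - Lag) over the subframe
        energy = np.dot(past, past)
        if energy <= 0.0:
            continue
        corr = np.dot(cur, past)
        score = corr * corr / energy               # normalized-correlation metric
        if score > best_score:
            best_lag, best_beta, best_score = lag, corr / energy, score
    return best_lag, best_beta                     # quantized before transmission
```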
The LPC residual is modeled by samples from a stochastic codebook. Typically, the codebook comprises N excitation code-vectors, each vector having a length L. According to the analysis-by-synthesis procedure, a search of the codebook is performed to determine the best excitation code-vector which, when scaled by a gain factor and processed through the two filters (i.e., long- and short-term), most closely restores the pitch and voice information. The resultant signal is used to compute an optimal gain (the gain corresponding to the minimum distortion) for that particular excitation vector, along with an error value. This best excitation code-vector and its associated gain provide for the reproduction of “good speech” as described above. The index value associated with the code-vector, together with the optimal gain, is then transmitted to the decoder at the receiving end. There, the selected excitation vector is multiplied by the appropriate gain, and the signal is passed through the two filters to generate the restored speech.
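The sketch below shows the core of such an analysis-by-synthesis search in deliberately simplified form: each code-vector is filtered through the combined synthesis and weighting filters (represented here by a single truncated impulse response h), the closed-form optimal gain is computed, and the index with the smallest weighted error wins. The names and the flat exhaustive loop are illustrative; practical coders use fast structured searches.

```python
import numpy as np

def search_codebook(target, codebook, h):
    """Pick the code-vector and gain minimizing ||target - gain * (c_i * h)||^2.
    `codebook` has shape (N, L); `target` is the weighted target signal;
    `h` is the impulse response of the filter cascade (truncated)."""
    best_i, best_gain, best_err = -1, 0.0, np.inf
    for i, c in enumerate(codebook):
        f = np.convolve(c, h)[:len(target)]        # filtered code-vector C_i H
        energy = np.dot(f, f)
        if energy <= 0.0:
            continue
        gain = np.dot(target, f) / energy          # optimal gain in closed form
        err = np.dot(target, target) - gain * np.dot(target, f)
        if err < best_err:
            best_i, best_gain, best_err = i, gain, err
    return best_i, best_gain                       # index and gain are transmitted
```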
To extract desired pitch parameters, the pitch parameters that minimize the following weighted coding error energy “d” must be calculated for each coding subframe, where one coding frame may be divided into several coding subframes for analysis and coding:
$$d = \left\| T - \beta P_{Lag} H - \alpha C_i H \right\|^2$$
where $T$ is the target signal that represents the perceptually filtered input signal, and $H$ is the impulse response matrix of the filter $W(z)/A(z)$. $P_{Lag}$ is the pitch prediction contribution having pitch lag “Lag” and prediction coefficient, or gain, $\beta$, which is uniquely defined for a given lag, and $C_i$ is the codebook contribution associated with index $i$ in the codebook and its corresponding gain $\alpha$. In addition, $i$ takes values between 0 and $N_c - 1$, where $N_c$ is the size of the excitation codebook.
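For a fixed pitch contribution and a fixed index $i$, the optimal codebook gain follows in closed form by setting the derivative of $d$ to zero (assuming, as is common, that the pitch parameters are determined first and then held fixed):

$$\frac{\partial d}{\partial \alpha} = 0 \quad\Longrightarrow\quad \alpha = \frac{\left( T - \beta P_{Lag} H \right) \cdot \left( C_i H \right)}{\left\| C_i H \right\|^{2}}$$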
Thus, given a particular pitch lag “Lag” and gain $\beta$, the pitch prediction contribution can be removed from the LPC residual $r(n)$. The resulting signal

$$\varepsilon(n) = r(n) - \beta \, p_{Lag}(n)$$
is called the pitch residual. The coding of this signal determines the excitation signal. In a CELP codec, the pitch residual is vector quantized by selecting an optimum codebook entry (quantizer) that best matches:
$$\varepsilon(n) = \alpha \, c_i(n) + \delta(n)$$

where $c_i(n)$ is the $n$th element of the $i$th quantizer, $\alpha$ is the associated gain, and $\delta(n)$ is the quantization error signal.
The codebook may be populated randomly or trained by selecting codebook entries frequently used in coding training data. A randomly populated codebook, for example, requires no training, or knowledge of the quantization error vectors from the previous stage. Such random codebooks also provide good quality estimation, with little or no signal dependency. A random codebook is typically populated using a Gaussian distribution, with little or no bias or assumptions of input or output coding. Nevertheless, random codebooks require substantial complexity and a significant amount of memory. In addition, random code-vectors do not accommodate the pitch harmonic phenomena, particularly where a long subframe is used.
One challenge in employing a random codebook is that, absent training, “good” quality speech coding cannot be ensured. With a trained codebook, for example, the code-vector distribution within the codebook is arranged to represent speech signal vectors. Conversely, a randomly populated codebook inherently has no such intelligent vector distribution. Thus, if the vectors happen to be distributed in a manner that is ineffective for encoding a given speech signal, undesirably large coding errors may result.
In a trained codebook, particular input vectors are selected to represent the coded vectors. The vector having the shortest distance to the other vectors within its grouping may be selected as the representative input vector. Upon partitioning the vector space into subspaces, each represented by such an input vector, the coordinates of the representative vectors are entered into the codebook. Although training avoids a codebook having disjoint and poorly organized vectors, there may be instances when the input vectors should represent very high or very low frequency speech (e.g., common female or male speech).
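A minimal sketch of one training pass in that style, assuming NumPy arrays: partition the training vectors by nearest code-vector, then represent each cell by its medoid, the member with the smallest total distance to the others. This is an illustrative nearest-neighbor/medoid scheme, not the patent's training procedure.

```python
import numpy as np

def train_codebook_pass(training_vectors, codebook):
    """One training pass: assign each training vector to its nearest
    code-vector, then represent each cell by its medoid. Inputs are
    arrays of shape (num_vectors, L) and (N, L)."""
    cells = [[] for _ in range(len(codebook))]
    for v in training_vectors:
        i = int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))
        cells[i].append(v)
    new_codebook = codebook.copy()
    for i, cell in enumerate(cells):
        if not cell:
            continue                               # empty cell: keep old entry
        cell = np.asarray(cell)
        # Pairwise squared distances; the medoid minimizes the row sum.
        d2 = np.sum((cell[:, None, :] - cell[None, :, :]) ** 2, axis=2)
        new_codebook[i] = cell[int(np.argmin(d2.sum(axis=1)))]
    return new_codebook
```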
Assignee: Conexant Systems Inc.
Law firm: Farjami & Farjami LLP
Examiners: William Korzuch; Martin Lerner