Fast search method for nearest neighbor vector quantization
Classification: Coded data generation or conversion – Digital code to digital code converters
Type: Reexamination Certificate (active)
Filed: 2003-09-26
Issued: 2004-12-28
Examiner: Young, Brian (Department: 2819)
U.S. Classes: C704S222000, C707S793000
Patent Number: 06836225
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates generally to a vector quantization method, and in particular, to a method of rapidly searching for an optimum codeword for nearest neighbor vector quantization.
2. Description of the Related Art
Nearest neighbor vector quantization (NNVQ) is one of the most widely used quantization methods in the data compression and recognition fields. At a given bit rate, nearest neighbor vector quantization has a smaller quantization error than scalar quantization methods such as pulse code modulation (PCM), differential pulse code modulation (DPCM) and adaptive differential pulse code modulation (ADPCM). Accordingly, nearest neighbor vector quantization is widely applied to speech coding, audio coding, image coding, and speech recognition.
To perform nearest neighbor vector quantization, a codebook comprised of a plurality of codewords must be searched for an optimum codeword, and an input vector is quantized with the codeword found by the search. The optimum codeword is the codeword that minimizes the quantization error with respect to the input vector. Conventional methods proposed for finding an optimum codeword through a codebook search can be classified into the following three methods: a first method performs nearest neighbor vector quantization by full search over all codewords constituting the codebook; a second method uses a partial distance search scheme; and a third method uses a threshold.
First, a method for performing nearest neighbor vector quantization by full search, which corresponds to the first method, will be described herein below.
FIG. 1 is a flowchart illustrating a procedure for performing nearest neighbor vector quantization by full search. Referring to FIG. 1, the parameters $D_{min}$, $i$ and 'index' for nearest neighbor vector quantization are initialized in step 110. That is, $D_{min}$ is set to a maximum value 'max', and $i$ and 'index' are set to 0. $D_{min}$ is a parameter that holds the minimum value among the quantization errors between the codewords $\overline{y}_i$ constituting a codebook and an input vector $\overline{x}$, and $i$ is a parameter for selecting one codeword from the codewords $\overline{y}_i$ in order to calculate its quantization error with the input vector $\overline{x}$. Assuming the total number of codewords constituting the codebook is $N$, $i$ ranges from 1 to $N$. The parameter 'index' indicates the codeword corresponding to the minimum quantization error value $D_{min}$ with respect to the input vector $\overline{x}$. Therefore, 'index' takes a particular value among the integers between 1 and $N$, the range of $i$.
If initialization of the parameters is completed, $i$ is compared with $N$ in step 112 to determine whether the quantization error calculation has been completed for all the codewords. If it has not, the procedure proceeds to step 114. The quantization error calculation is the procedure for determining the optimum codeword that minimizes the quantization error. In step 114, $i$ is increased by 1 so that the codeword with the next index can be selected. Once the codeword with the next index is selected in step 114, the quantization error $D_i$ between the selected codeword and the input vector is calculated in step 116. For example, the quantization error $D_i$ between an input vector $\overline{x}$ of order $k$ and the $i$-th codeword $\overline{y}_i$ can be calculated by
$$d\left[\overline{x}, \overline{y}_i\right] = D_i = \frac{1}{k}\left(\overline{x}-\overline{y}_i\right)\left(\overline{x}-\overline{y}_i\right)^{T} = \frac{1}{k}\sum_{j=1}^{k}\left[\overline{x}_j - y_{ij}\right]^{2} \qquad (1)$$
where $k$ is the order of the input vector $\overline{x}$, $\overline{y}_i$ is the $i$-th codeword, and $\overline{x}_j$ and $y_{ij}$ are the $j$-th elements of the vector $\overline{x}$ and the codeword $\overline{y}_i$, respectively.
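As a concrete illustration, Equation (1) can be computed in a few lines; the following is a minimal Python sketch (the function name `distortion` and the use of NumPy are illustrative choices, not part of the patent):

```python
import numpy as np

def distortion(x, y_i):
    """Quantization error D_i of Equation (1): the squared Euclidean
    distance between input vector x and codeword y_i, averaged over
    the k vector dimensions."""
    x = np.asarray(x, dtype=float)
    y_i = np.asarray(y_i, dtype=float)
    k = x.size
    diff = x - y_i
    return float(diff @ diff) / k  # (1/k) * sum_j (x_j - y_ij)^2
```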
If the calculation of $D_i$ for the current $i$ is completed in step 116, it is determined in step 118 whether $D_i$ is larger than $D_{min}$. If $D_i$ is larger than $D_{min}$, the procedure returns to step 112 without changing $D_{min}$, and steps 112 to 116 are repeated. However, if $D_i$ is not larger than $D_{min}$, the procedure returns to step 112 after setting $D_{min}$ to the value of $D_i$ and setting 'index' to $i$ in step 120. The reason for setting $D_{min}$ to the value of $D_i$ is to maintain $D_{min}$ as the running minimum; by setting 'index' to $i$ at the same time, the corresponding codeword is designated.
Steps 114 to 120 are repeated until it is determined in step 112 that the process of determining the optimum codeword for nearest neighbor vector quantization has been completed for all $N$ codewords. When the process is completed for all $N$ codewords, $D_{min}$ holds the minimum quantization error value and the parameter 'index' designates the corresponding codeword. Therefore, in step 122, the value of 'index' is designated as 'best_index', and the input vector $\overline{x}$ is quantized by the codeword $\overline{y}_{index}$ corresponding to the best_index.
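Putting the steps of FIG. 1 together, the full-search procedure can be sketched as follows in Python (a minimal sketch under the assumption that the codebook is an N-by-k array; names such as `full_search` are illustrative, and indices are 0-based rather than the flowchart's 1-based $i$):

```python
import numpy as np

def full_search(x, codebook):
    """Full-search NNVQ per FIG. 1: compute D_i for every codeword
    and return the index of the one with the smallest error."""
    x = np.asarray(x, dtype=float)
    k = x.size
    d_min = np.inf      # step 110: D_min starts at 'max'
    best_index = -1     # step 110: 'index' starts unset
    for i, y_i in enumerate(codebook):    # steps 112-114: visit all N codewords
        d_i = np.sum((x - y_i) ** 2) / k  # step 116: Equation (1)
        if d_i <= d_min:                  # steps 118-120: keep the running minimum
            d_min = d_i
            best_index = i
    return best_index, d_min              # step 122: best_index and D_min
```

The input vector would then be quantized by `codebook[best_index]`.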
In the first method described above, the quantization error $D_i$ with the input vector is calculated for every codeword constituting the codebook. The first method has two fundamental problems. First, as the order $k$ of the input vector increases, the computational complexity of the codebook search grows geometrically. This problem must be resolved before nearest neighbor vector quantization can be performed on a real-time basis. Second, as the order $k$ of the input vector increases, the size of the memory required for storing the codewords also grows geometrically.
For example, assuming that the order of a vector is $k$ and the size of a codebook is $N$, the full search scheme requires $(k \times N)$ multiplications and $(2kN - N)$ additions, subtractions and comparisons to quantize one input vector. In the full search scheme, the memory for storing the codebook, whose size is proportional to the size of the codebook, also grows geometrically as the order $k$ of a vector increases.
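To make these counts concrete (the particular values of $k$ and $N$ here are illustrative, not from the patent): with $k = 16$ and $N = 1024$, quantizing a single input vector by full search requires $16 \times 1024 = 16{,}384$ multiplications and $2 \times 16 \times 1024 - 1024 = 31{,}744$ additions, subtractions and comparisons.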
Next, a vector quantization method using a partial distance search (PDS) scheme, which corresponds to the second method, will be described herein below. The vector quantization method using the partial distance search scheme was proposed in IEEE Transactions on Communications, vol. COM-33, pp. 1132-1133, in order to reduce the number of calculations required in the full search scheme. The entire content of this IEEE document is incorporated herein by reference. When calculating the quantization error between an input vector and each candidate codeword, the second method calculates the quantization error over only $r$ elements rather than all elements of a candidate vector (where $r \leq k$). This works because, while the quantization error for a particular codeword is being accumulated, if the partial error accumulated up to some element $r$ already exceeds the current minimum quantization error value $D_{min}$, it is obvious that continuing the calculation for that codeword can only yield an error larger than $D_{min}$.
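In code, the partial distance search amounts to testing the accumulated partial sum against the current minimum inside the element loop and abandoning the codeword as soon as the test fails. The following is a minimal Python sketch (illustrative names and 0-based indices; since the $1/k$ factor in Equation (1) is common to all codewords, the comparison can be done on un-normalized sums):

```python
import numpy as np

def pds_search(x, codebook):
    """Partial distance search NNVQ: same result as full search, but the
    per-codeword distance loop stops early once the partial sum of squared
    differences already exceeds the best sum found so far."""
    x = np.asarray(x, dtype=float)
    k = x.size
    best_sum = np.inf   # un-normalized running minimum, i.e. k * D_min
    best_index = -1
    for i, y_i in enumerate(codebook):
        partial = 0.0
        for j in range(k):                 # accumulate element by element
            partial += (x[j] - y_i[j]) ** 2
            if partial > best_sum:         # D_i can only grow: abandon codeword
                break
        else:                              # all k elements used; new minimum
            best_sum = partial
            best_index = i
    return best_index, best_sum / k        # restore the 1/k of Equation (1)
```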
FIG. 2 is a flowchart illustrating a vector quantization procedure using the partial distance search scheme. Referring to FIG. 2, the parameters $D_{min}$, $i$ and 'index' for nearest neighbor vector quantization are initialized in step 210. That is, $D_{min}$ is set to a maximum value 'max', and $i$ and 'index' are set to 0. If initialization of the parameters is completed, a parameter $r$ for counting the number of elements used during the quantization error calculation for a particular codeword $\overline{y}_i$ is initialized.
Inventors: Chi Sang-Hyun; Hwang Seong-Kyu; Kang Sang-Won; Kim Yong-Serk; Lee Nam-Il
Examiners: Young, Brian; Nguyen, John
Attorney, Agent or Firm: Roylance Abrams Berdo & Goodman L.L.P.
Assignee: Samsung Electronics Co., Ltd.