FIELD OF THE INVENTION
The present invention relates to vector quantisation data compression and in particular to a method of constructing a codebook for use in vector quantisation data compression and to codebooks so constructed.
BACKGROUND OF THE INVENTION
When storing or transmitting large amounts of digital data it is often desirable to be able to compress the data significantly so as to reduce storage space or transmission time. This need for compression is particularly acute when dealing with digital video, which tends to involve extremely large amounts of data that must be handled at high speed (particularly for real-time applications such as video phone communication).
One technique used for compressing digital video is known as ‘vector quantisation’, where a codebook of reference patches (e.g. relatively small portions taken from one or more ‘library’ or ‘archetypal’ images) is utilised. In the simplest form of this technique, each image to be compressed is partitioned into a number of image patches and a matching (i.e. similar) reference patch is selected for each image patch from the codebook by comparing patches on a pixel-by-pixel basis. The codebook index for each chosen reference patch is stored, together with the corresponding position vectors of the image patches (i.e. their position in the original image), to provide a compressed representation of the image. Provided that a copy of the codebook is available, an approximation to the original image can be constructed by using the stored codebook indices to recover the required set of reference patches and inserting these into an image frame using the respective stored image patch position vectors. The achievable degree of compression is a function of the size of the image patches into which the image is partitioned, larger patches allowing higher compression.
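As a rough illustration of this plain (non-hierarchical) scheme, the following Python sketch partitions a greyscale image into square patches, picks the closest codebook patch for each by exhaustive search, and rebuilds an approximation from the stored indices and positions. The patch size, codebook size and random placeholder data are assumptions made for the example, not values taken from the text.

import numpy as np

def compress(image, codebook, patch=4):
    # Encode an image as (row, col, codebook index) triples.
    # image: 2-D array of grey-level pixel values.
    # codebook: array of shape (entries, patch, patch) of reference patches.
    indices = []
    for r in range(0, image.shape[0] - patch + 1, patch):
        for c in range(0, image.shape[1] - patch + 1, patch):
            block = image[r:r + patch, c:c + patch].astype(float)
            # Exhaustive search: pick the entry with the smallest squared error.
            errs = ((codebook.astype(float) - block) ** 2).sum(axis=(1, 2))
            indices.append((r, c, int(errs.argmin())))
    return indices

def decompress(indices, codebook, shape, patch=4):
    # Rebuild an approximation of the image from the stored indices.
    out = np.zeros(shape, dtype=codebook.dtype)
    for r, c, k in indices:
        out[r:r + patch, c:c + patch] = codebook[k]
    return out

# Toy usage with random placeholder data.
rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(256, 4, 4), dtype=np.uint8)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
approx = decompress(compress(image, codebook), codebook, image.shape)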
Simple vector quantisation techniques require an exhaustive search of the codebook for each image patch, a process which is extremely computationally expensive where large codebooks are used to maximise the quality of the compressed images and where large patches, necessitating a large number of pixel comparisons, are generated to maximise the compression ratio. One technique used to reduce this search overhead is known as ‘hierarchical vector quantisation’ (HVQ). This technique is described in ‘Hierarchical Vector Quantisation of Perceptually Weighted Block Transforms’; N. Chadda, M. Vishwanath and P. A. Chou; Proceedings of the Data Compression Conference; IEEE, 1995, pp. 3-12.
To illustrate HVQ, consider that it is desired to compress an individual digitised frame of a black and white video sequence, where the frame is composed of an array of pixels each having an intensity value in the form of a byte-integer. Suppose that the frame is divided into patches formed by pairs of horizontally adjacent pixels (so that each pair is represented by a pair of intensity values, or a 2-D vector [i,j]) and a codebook B0 is provided which contains a number of reference patches, each reference patch consisting of a pair of horizontally adjacent pixels and the number of reference patches in the codebook (e.g. 256) being considerably less than the total number of possible pixel pair patches. An index table T0 is provided which maps every possible value of the 2-D vectors to associated codebook addresses, such that T0[i,j] addresses the closest entry in B0 (i.e. B0[T0[i,j]]) to the vector [i,j]. Finding the closest entry in the codebook to a given input vector can then be determined simply by looking up the table T0.
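A minimal sketch of this level-0 table, assuming byte-integer pixel values (0-255): T0 maps every possible pixel pair (i, j) to the index of the closest 2-D reference patch in B0. The codebook contents below are random placeholders; building T0 is a one-off cost, after which quantising a pair is a single look-up.

import numpy as np

rng = np.random.default_rng(1)
B0 = rng.integers(0, 256, size=(256, 2))   # 256 reference pixel pairs (placeholder)

# Precompute T0: for each of the 256*256 possible pairs, record the index of
# the nearest codebook entry under squared-error distance.
T0 = np.empty((256, 256), dtype=np.uint8)
for i in range(256):
    for j in range(256):
        d = ((B0 - np.array([i, j], dtype=float)) ** 2).sum(axis=1)
        T0[i, j] = d.argmin()

# Quantising a pixel pair is now a table look-up rather than a search.
i_val, j_val = 37, 201
nearest_patch = B0[T0[i_val, j_val]]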
Consider now that the frame to be compressed is sub-divided into patches of 2×2 pixels, so that each patch is represented by a four-dimensional vector of intensity values [i,j,k,l]. Each of these 4-D vectors is split into two 2-D vectors and, using table T0, can be mapped to a further 2-D vector [T0[i,j], T0[k,l]]. Suppose that a second codebook B1 contains a number of 4-D vectors [p,q,r,s] which correspond to a set of four-pixel patches (where again the number of patches is considerably less than the total number of possible four-pixel patches). A second level index table T1 is constructed such that B1[T1[T0[i,j], T0[k,l]]] is the closest entry in B1 to [B0[T0[i,j]], B0[T0[k,l]]]. Finding the closest entry in the codebook B1 to a given input vector can then be determined by looking up the table T0 for pixel pairs [i,j] and [k,l], and then applying the resultant 2-D vectors T0[i,j], T0[k,l] to the table T1. This process is illustrated in FIG. 1.
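The three look-ups can be written out directly. In the sketch below the tables and codebooks are random placeholders standing in for B1, T0 and T1; only the access pattern matters here, not the placeholder contents.

import numpy as np

rng = np.random.default_rng(2)
B1 = rng.integers(0, 256, size=(256, 4))                    # level-1 codebook (2x2 patches, placeholder)
T0 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # placeholder level-0 table
T1 = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # placeholder level-1 table

def hvq_encode(i, j, k, l):
    # Return the B1 index for the 2x2 patch [i, j, k, l] using three look-ups.
    a = T0[i, j]        # quantise the first pixel pair
    b = T0[k, l]        # quantise the second pixel pair
    return T1[a, b]     # map the pair of level-0 indices to a level-1 index

patch_index = hvq_encode(10, 20, 30, 40)
reconstructed = B1[patch_index]   # 4-D vector approximating [10, 20, 30, 40]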
In a non-HVQ process, it is necessary to provide a codebook containing 4-D vectors and, for each 4-D vector (or patch) in an image to be compressed, to compare each element of that vector against the corresponding element of each vector in the codebook. Such an exhaustive search requires n×m comparison operations to find a matching vector, where n is the number of elements in the vector and m is the number of entries in the codebook. The HVQ process in contrast requires n−1 look-up steps to find a matching vector. Given that comparisons and look-ups are of approximately the same computational cost, it will be appreciated that n×m is much greater than n−1 for reasonable values of m, and that the HVQ look-up process will be approximately m times faster than the exhaustive search.
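Taking the figures from the example above, a 2×2 patch gives n=4 and a 256-entry codebook gives m=256, so an exhaustive search performs n×m=1024 comparisons per patch whereas the HVQ route performs only n−1=3 table look-ups, a saving of roughly a factor of m.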
For each further level of the index table (T2, T3, etc.) added in the HVQ process, the compression ratio is increased further. It is noted that in practice only the final-level codebook B need be retained, as the intermediate codebooks are not essential to carrying out the HVQ process.
Hierarchical vector quantisation may be used to compress colour images by separating out the three colour components (e.g. red, green and blue), compressing each component separately, and then recombining the three sets of codebook reference patches to produce a decompressed image.
It will be clear that in order to use HVQ effectively, it is necessary to construct a series of tables T0, T1, ..., Tm and codebooks B0, B1, ..., Bm, where m is log2 of the number of pixels in the largest patch used for compression. The conventional HVQ approach is to develop the codebooks first, e.g. by extracting reference patches from a number of archetypal image frames, and then to derive the tables from the codebooks.
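As a hedged sketch of that conventional 'codebooks first' step, the snippet below seeds a final-level codebook simply by sampling patches at random from a set of training frames; the sampling strategy, frame source and sizes are assumptions made for illustration (the text does not prescribe them).

import numpy as np

def build_codebook(frames, patch=4, entries=256, seed=0):
    # Draw `entries` patches of size patch x patch at random from the frames.
    rng = np.random.default_rng(seed)
    book = np.empty((entries, patch, patch), dtype=frames[0].dtype)
    for n in range(entries):
        f = frames[rng.integers(len(frames))]
        r = rng.integers(f.shape[0] - patch + 1)
        c = rng.integers(f.shape[1] - patch + 1)
        book[n] = f[r:r + patch, c:c + patch]
    return book

rng = np.random.default_rng(3)
frames = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(4)]
B_final = build_codebook(frames)   # tables T0..Tm would then be derived from this codebook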
Since the patches at the final level m are twice the size of the patches at level m−1, the index table Tm is constructed by taking all possible pairs of patches at level m−1 and conducting an exhaustive search of the patches in the codebook at level m to identify the patch at level m which is most similar. Thus, if the level m−1 patches 7 and 13, when placed together, most closely resemble level m patch 100, Tm[7,13] would be set to 100. This process is propagated back through the levels to create a codebook and an index table at each level.
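That back-propagation step can be sketched as follows: Tm[a, b] is filled in by placing level m−1 patches a and b side by side and finding the most similar level-m codebook entry by exhaustive search. Patches are treated here as flat vectors, and the codebooks are random placeholders rather than trained ones.

import numpy as np

def build_table(B_prev, B_next):
    # Map every pair of level m-1 indices to the closest level-m codebook index.
    n_prev = len(B_prev)
    table = np.empty((n_prev, n_prev), dtype=np.uint16)
    for a in range(n_prev):
        for b in range(n_prev):
            candidate = np.concatenate([B_prev[a], B_prev[b]]).astype(float)
            errs = ((B_next - candidate) ** 2).sum(axis=1)
            table[a, b] = errs.argmin()
    return table

rng = np.random.default_rng(4)
B0 = rng.integers(0, 256, size=(256, 2))   # level m-1 codebook: pixel pairs (placeholder)
B1 = rng.integers(0, 256, size=(256, 4))   # level m codebook: 2x2 patches, stored flat (placeholder)
T1 = build_table(B0, B1)
# e.g. if patches 7 and 13 placed together most resemble entry 100, then T1[7, 13] == 100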
In this approach, the selection of level m codebook patches taken from the archetypal image frames is essentially arbitrary. The result is that certain ones of these patches may never or only infrequently be selected as a match for a patch from an image to be compressed, resulting in an inefficient codebook.
SUMMARY OF THE INVENTION
It is an object of the present invention to overcome or at least mitigate disadvantages of conventional vector quantisation codebooks.
It is a second object of the present invention to provide a method of generating a vector quantisation codebook, wherein the entries in the codebook are used with substantially equal frequency when compressing data whose statistical properties mirror those of the training data used to construct the codebook.
According to a first aspect of the present invention there is provided a method of constructing a vector quantisation codebook ...