Robust codebooks for vector quantization

Image analysis – Image compression or coding – Quantization

Reexamination Certificate


Details

U.S. classifications: C382S250000, C382S251000, C704S222000, C704S230000, C375S240220, C375S240290

Type: Reexamination Certificate

Status: active

Patent number: 06807312

ABSTRACT:

BACKGROUND
1. Field
This disclosure relates to vector quantization for the compression of image, video, speech, audio, or other data types, and more particularly to methods for developing codebooks for vector quantization.
2. Background
Data compression techniques attempt to reduce the amount of data used to represent an original entity while still providing enough information to reconstruct it. For example, image compression reduces the amount of data necessary to reconstruct an original image, and speech compression reduces the amount of data needed to reconstruct speech. These are only examples; compression can be applied to any kind of data.
Vector quantization (VQ) is a lossy compression technique. It partitions the entire data space into a set of representative regions, and within each region a representative approximation, referred to as a codevector, is designated. The regions and codevectors are developed through a training procedure that uses typical data sets, such as typical speech patterns or typical images. A widely used training procedure was originally proposed in 1980 by Linde, Buzo and Gray and is therefore often referred to as the LBG algorithm.
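For illustration only, the following Python sketch shows an LBG/k-means-style training loop of the general kind described above; the function name, array shapes, and NumPy usage are assumptions for this sketch, not details taken from the patent.

# Minimal LBG-style codebook training sketch (illustrative; not the patented method).
# `train_vectors` is assumed to be an (N, d) NumPy array of training vectors.
import numpy as np

def train_codebook(train_vectors, codebook_size, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize codevectors by sampling from the training set.
    idx = rng.choice(len(train_vectors), codebook_size, replace=False)
    codebook = train_vectors[idx].astype(float)
    for _ in range(iters):
        # Partition step: assign each training vector to its nearest codevector.
        d2 = ((train_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        # Update step: move each codevector to the centroid of its region.
        for k in range(codebook_size):
            members = train_vectors[nearest == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook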
The LBG algorithm relies on the relative frequency with which patterns occur in the training images, and typically a large number of training data sets are used. This approach generally works well for typical patterns. However, rare data combinations may be missing from the training set entirely, and the resulting codebook performs very poorly when those combinations occur.
One alternative is an approach referred to as Lattice VQ. This approach mathematically partitions the data space into equal-sized regions, so rare data patterns are covered. Lattice VQ performs reasonably well if the source is uniformly distributed over the vector space, but it performs very poorly if the source has a skewed distribution.
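As an illustration of the uniform-partition idea, the sketch below quantizes vectors onto a simple cubic lattice with a fixed step size; practical Lattice VQ schemes often use denser lattices, and the function name and step size here are assumptions made for the example.

# Illustrative lattice quantization on a simple cubic (Z^n) lattice with step `delta`.
# Every region is a hypercube of side `delta`, so the partition is uniform
# regardless of how the source data are distributed.
import numpy as np

def lattice_quantize(vectors, delta=0.1):
    return np.round(np.asarray(vectors, dtype=float) / delta) * delta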
Therefore, an approach is needed that performs well on typical data sets and also handles rare patterns, as Lattice VQ does.
SUMMARY
One aspect of the disclosure is a method for data compression. An encoder receives data vectors derived from the original data to be compressed and uses a vector quantization codebook to encode them into encoded vectors. The codebook is produced from a training set that is a compound data set, containing both real data vectors and artificial data vectors. Each encoded vector is represented by its index in the codebook, and the indexes are transmitted across a communication channel or sent to storage.
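A minimal sketch of the encode/decode path just described, assuming a NumPy codebook array such as the one produced by the training sketch above; the function names are illustrative, not the claimed implementation.

# Each data vector is mapped to the index of its nearest codevector;
# only the indexes need to be transmitted or stored.
import numpy as np

def encode(data_vectors, codebook):
    d2 = ((data_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)          # codebook indexes, one per input vector

def decode(indexes, codebook):
    return codebook[indexes]          # reconstructed (approximate) vectors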
Another aspect of the encoder is the artificial data set. The artificial data set may include a uniformly distributed data set, a diagonal data set, or both.
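As a hypothetical illustration of such a compound training set, the sketch below augments real training vectors with uniformly distributed vectors and "diagonal" vectors whose components are all equal; this reading of "diagonal", along with the proportions and value ranges, is an assumption for the example, not taken from the disclosure.

# Hypothetical compound training set: real vectors plus artificial vectors.
import numpy as np

def compound_training_set(real_vectors, n_uniform=1000, n_diagonal=1000,
                          low=0.0, high=255.0, seed=0):
    rng = np.random.default_rng(seed)
    d = real_vectors.shape[1]
    # Uniformly distributed artificial vectors covering the whole data space.
    uniform = rng.uniform(low, high, size=(n_uniform, d))
    # "Diagonal" artificial vectors: all components of each vector are equal.
    diagonal = np.repeat(rng.uniform(low, high, size=(n_diagonal, 1)), d, axis=1)
    return np.vstack([real_vectors, uniform, diagonal])

# The resulting array could then be passed to a training routine such as
# train_codebook() above to produce a codebook that also covers rare patterns.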


REFERENCES:
patent: 5247348 (1993-09-01), Israelsen et al.
patent: 5506801 (1996-04-01), Tawel
patent: 5596659 (1997-01-01), Normile et al.
patent: 5600754 (1997-02-01), Gardner et al.
patent: 5721791 (1998-02-01), Maeda et al.
patent: 5751856 (1998-05-01), Hirabayashi
patent: 5822465 (1998-10-01), Normile et al.
patent: 6072910 (2000-06-01), Maeda et al.
patent: 6154572 (2000-11-01), Chaddha
patent: 6438258 (2002-08-01), Brock-Fisher et al.
patent: 6717990 (2004-04-01), Abousleman
Y. Linde, A. Buzo and R. M. Gray, “An Algorithm for Vector Quantizer Design,” IEEE Trans. on Communications, vol. COM-28, no. 1, pp. 84-95, Jan. 1980.
