Reexamination Certificate
2000-03-03
2004-08-03
Johns, Andrew W. (Department: 2621)
Image analysis
Image compression or coding
Pyramid, hierarchy, or tree structure
C382S233000, C382S248000, C375S240110
Reexamination Certificate
active
06771828
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates in general to processing digital data, and in particular, to a system and method for progressively transform coding image data using hierarchical lapped transforms for compression of the image data.
2. Related Art
Digital images are widely used in several applications such as, for example, imaging software, digital cameras, Web pages and digital encyclopedias. It is usually necessary to compress these images because of storage constraints and the desire to decrease access or download times. Higher compression of a digital image means that more images can be stored on a memory device (such as a diskette, hard drive or memory card) and that images can be transferred faster over limited-bandwidth transmission lines (such as telephone lines). Thus, efficient and effective compression of images is highly important and desirable.
One of the most popular and widely used techniques of image compression is the Joint Photographic Experts Group (JPEG) standard. The JPEG standard operates by mapping each 8×8 block of pixels into the frequency domain using a discrete cosine transform (DCT). The coefficients produced by the DCT are divided by a scale factor and rounded to the nearest integer (a process known as quantization) and then mapped to a one-dimensional vector via a fixed zigzag scan pattern. This one-dimensional vector is encoded using a combination of run-length encoding and Huffman encoding.
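For illustration only, the following Python sketch traces that block pipeline: an 8×8 block is transformed with an orthonormal DCT-II, quantized, and read out in the fixed zigzag order. The single uniform quantization step and the helper names are assumptions made for this example; the JPEG standard itself specifies per-frequency quantization tables and follows this stage with run-length and Huffman coding.

import numpy as np

def dct_matrix(n=8):
    """Orthonormal n-point DCT-II basis (rows are frequencies)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def zigzag_order(n=8):
    """The fixed zigzag scan: low-frequency coefficients first."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def jpeg_style_block_encode(block, step=16.0):
    """Transform, quantize, and zigzag-scan one 8x8 pixel block."""
    C = dct_matrix(8)
    coeffs = C @ (block - 128.0) @ C.T            # 2-D DCT of the level-shifted block
    q = np.round(coeffs / step).astype(int)       # uniform step; JPEG uses per-frequency tables
    return np.array([q[r, c] for r, c in zigzag_order(8)])

block = np.tile(np.linspace(0, 255, 8), (8, 1))   # a smooth synthetic block
vector = jpeg_style_block_encode(block)           # ready for run-length + Huffman coding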
Although JPEG is a popular and widely used compression technique, it has several disadvantages. For example, one disadvantage of JPEG is that at low bit rates the DCT produces irregularities and discontinuities in a reconstructed image (known as tiling or blocking artifacts). Blocking artifacts cause the boundary between groups of 8×8 blocks of pixels to become visible in the reconstructed image. These blocking artifacts cause an undesirable degradation in image quality. Another disadvantage of JPEG is that JPEG cannot perform image reconstruction that is progressive in fidelity. In other words, if an image is encoded at a certain fidelity and a lower fidelity is later desired (for example, due to limited bandwidth or storage availability), the image must be decoded and re-encoded.
In order to overcome these shortcomings of JPEG, most modern image compression techniques use a wavelet transform followed by quantization and entropy encoding. The wavelet transform (WT) is preferred over the DCT used in JPEG because the WT does not produce blocking artifacts and allows image reconstruction that is progressive in resolution. Moreover, the WT leads to better energy compaction and thus better distortion/rate performance than the DCT. WT-based compression provides compression ratios that typically are from 20% to 50% better than the JPEG standard. In fact, the WT outperforms the DCT so markedly that all current compression techniques being considered for the JPEG-2000 standard use WT-based compression.
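As a rough illustration of the multiresolution decomposition that makes resolution-progressive reconstruction possible, the sketch below performs one analysis level of a 2-D Haar wavelet transform. The Haar filters are chosen only for brevity; JPEG-2000 and other WT-based coders use longer biorthogonal filter banks.

import numpy as np

def haar_2d(image):
    """One analysis level of a 2-D Haar wavelet transform (image sides must be even)."""
    a = np.asarray(image, dtype=float)
    # Rows: split into low-pass (averages) and high-pass (differences)
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    # Columns: repeat on both halves, yielding the four subbands
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)   # coarse approximation
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)   # horizontal detail
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)   # vertical detail
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh

# Recursing on the LL subband produces the multiresolution pyramid that
# allows a decoder to stop at any resolution level.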
Most current WT-based compression techniques decompose an image into coefficients and use some form of entropy encoding (such as adaptive Huffman encoding or arithmetic encoding) of the coefficients to further compress the image. These types of encoding, however, can be quite complex and use, for example, complex symbol tables (such as in adaptive Huffman encoding) or complex data structures (such as zerotree data structures) that depend on the data. Thus, most current WT-based techniques are complex and difficult to implement.
At least one WT-based compression technique, progressive WT-based compression, has the advantage of not requiring data-dependent data structures (such as zerotrees) or complex symbol tables. Progressive WT-based compression uses entropy encoding of quantized wavelet coefficients together with a simple, data-independent reordering that clusters most of the large and small wavelet coefficients into separate groups. The bit planes of the reordered wavelet coefficients are then encoded with encoders that do not require complex symbol tables, such as, for example, adaptive run-length and Rice-Golomb encoders. These features make progressive WT-based compression simpler to implement than other WT-based compression techniques, such as JPEG-2000.
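A Rice-Golomb coder of the kind mentioned above can be written in a few lines, which is exactly why no symbol tables are needed. The sketch below is a generic illustration rather than the particular coder of any standard, and the parameter k is fixed here, whereas an adaptive coder would adjust it from the data already seen.

def rice_encode(values, k=2):
    """Rice (Golomb power-of-two) code for non-negative integers.

    Each value v is sent as a unary quotient v >> k followed by its k
    low-order bits, so small magnitudes get short codewords and no symbol
    table (as in Huffman coding) is ever built or stored.
    """
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")          # unary part, terminated by a 0
        bits.append(format(r, f"0{k}b"))    # k-bit binary remainder
    return "".join(bits)

# Small magnitudes dominate quantized coefficients, so codewords stay short:
print(rice_encode([0, 1, 3, 6, 12], k=2))   # '000' '001' '011' '1010' '111000' concatenated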
However, progressive WT-based compression still may be difficult to implement in some applications. In particular, DCT processing of 8×8 pixel blocks (as used in the current JPEG standard, for example) has been optimized in many software and hardware implementations, but is not used in WT-based compression. Thus, in order to implement progressive WT-based compression, new software or new hardware modules must be developed and installed to perform computation of the required wavelet transforms. This additional cost and time associated with implementation can reduce the attractiveness of progressive WT-based compression.
Accordingly, there exists a need for a progressive image compression technique that is efficient, simple and easy to implement with existing hardware and software. This progressive image compression technique would retain the advantages of progressive WT-based compression and of the JPEG compression standard without their disadvantages. Specifically, it would use the same 8×8 pixel blocks used in the JPEG standard but would not produce blocking artifacts. This would allow the technique to leverage existing JPEG hardware and software, providing a much simpler and less expensive implementation than current WT-based compression techniques. Moreover, the progressive image compression would use data-independent reordering structures to further simplify implementation. Whatever the merits of the above-mentioned systems and methods, they do not achieve the benefits of the present invention.
SUMMARY OF THE INVENTION
To overcome the limitations in the prior art as described above, and other limitations that will become apparent upon reading and understanding the present specification, the present invention is embodied in a system and method for compressing image data using a lapped biorthogonal transform (LBT). The present invention encodes data by generating coefficients using a hierarchical LBT, reordering the coefficients in a data-independent manner into groups of similar data, and encoding the reordered coefficients using adaptive run-length encoding. The hierarchical LBT computes multiresolution representations. The use of the LBT allows the present invention to encode image data in a single pass at any desired compression ratio and to make use of existing discrete cosine transform (DCT) software and hardware modules for fast processing and easier implementation.
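The following sketch illustrates adaptive run-length encoding of a mostly-zero binary sequence, such as one bit plane of reordered coefficients. The adaptation rule and code layout are assumptions made for the example and are not the specific coder of the present invention.

def adaptive_rle(bits):
    """Adaptive run-length coder for a mostly-zero bit plane (illustrative).

    A complete run of 2**k zeros costs a single '0' and makes k grow; a run
    cut short by a '1' is sent as '1' plus the partial run length in k bits
    and makes k shrink.  No symbol tables or data-dependent structures are
    kept, and the parameter adapts purely from the bits already coded.
    """
    out, run, k = [], 0, 0
    for b in bits:
        if b == 0:
            run += 1
            if run == 1 << k:          # full run: cheap escape symbol, adapt upward
                out.append("0")
                run, k = 0, k + 1
        else:                          # run interrupted by a significant bit
            out.append("1" + (format(run, f"0{k}b") if k else ""))
            run, k = 0, max(k - 1, 0)
    if run:                            # flush a trailing partial run (decoder knows the plane length)
        out.append("1" + format(run, f"0{k}b"))
    return "".join(out)

# Example: a sparse 38-bit plane compresses to a much shorter string
print(adaptive_rle([0] * 9 + [1] + [0] * 22 + [1] + [0] * 5))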
The present invention provides several advantages over current Joint Photographic Experts Group (JPEG) and wavelet-based compression technologies. In particular, unlike JPEG compression, the present invention does not produce blocking artifacts even though, in a preferred embodiment, the present invention uses 8×8 block discrete cosine transform (DCT) as an intermediate step for computing LBT blocks. Moreover, the present invention does not use wavelets and is faster than wavelet-based compression. The present invention does not use zerotrees or other data-dependent data structures, so that implementation of the present invention into hardware or software is simplified.
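To illustrate how a block DCT can serve as an intermediate step of a lapped transform, the one-dimensional sketch below mixes the samples on either side of each block boundary with a small invertible operator before applying ordinary 8-point DCTs, so existing DCT routines are reused unchanged. The two-sample butterfly and the mixing gain are toy assumptions; an actual LBT applies designed biorthogonal pre- and post-filters over more samples on each side of the boundary.

import numpy as np
from scipy.fft import dct

def toy_lapped_transform(x, block=8, mix=0.5):
    """1-D sketch of a lapped transform built from block DCTs.

    An invertible 'butterfly' mixes the two samples straddling every block
    boundary before the usual 8-point DCT is applied, giving neighbouring
    blocks a shared view of the boundary samples.  The decoder undoes the
    mixing after the inverse DCT.  'mix' is purely illustrative.
    """
    y = np.asarray(x, dtype=float).copy()
    assert len(y) % block == 0
    for b in range(block, len(y), block):
        lo, hi = y[b - 1], y[b]
        y[b - 1], y[b] = lo + mix * hi, hi - mix * lo   # invertible 2x2 boundary operator
    # Existing block-DCT code (software or hardware) is reused unchanged here.
    return np.stack([dct(y[i:i + block], norm="ortho")
                     for i in range(0, len(y), block)])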
In general, the system of the present invention includes a transformation module, which generates transform coefficients using an LBT and a DCT; a quantization module, which approximates scaled coefficients by integers; a reordering module, which reorders the coefficients into groups of similar data; and an encoding module, which uses adaptive run-length encoding to encode the reordered coefficients.
Dang Duy M.
Fischer Craig S.
Johns Andrew W.
Lyon & Harr L.L.P.
Microsoft Corporation