Block-based, adaptive, lossless image coder

Type: Reexamination Certificate (active)
Patent number: 06654503
Filed: 2000-04-28
Issued: 2003-11-25
Examiner: Wu, Jingge (Department: 2623)
Classification: Image analysis – Image compression or coding – Lossless compression
U.S. classes: C382S238000, C348S411100, C375S240120
ABSTRACT:
FIELD OF THE INVENTION
This invention relates to selective compression of digital images.
BACKGROUND OF THE INVENTION
Compression of digital images using lossless schemes is an integral part of a wide variety of applications, including medical imaging, remote sensing, printing, and computing. Recent advances in digital electronics and electromechanics have also encouraged the widespread use of digital images. Algorithms for compression (or coding) of images have become sophisticated, spurred by applications and by standardization activities such as JPEG (“Digital Compression and Coding of Continuous Tone Images”, ISO Document No. 10918-1). The lossy version of JPEG, introduced around 1990, gained an enormous following in the industry due to its simplicity, public-domain software, the efforts of the Independent JPEG Group (IJG), and the availability of inexpensive custom hardware (C-Cube Microsystems). The lossless counterpart did not gain significant acceptance, but it provided momentum for diversified research activities.
The primary approaches in lossless compression coding have used differential pulse code modulation (DPCM), followed by entropy coding of the residuals (W. Pennebaker and J. Mitchell, JPEG Still Image Compression Standard, Van Nostrand Reinhold, New York, 1993). Recently, schemes that utilize transforms or wavelets have also been investigated and have gained acceptance (A. Zandi et al., “CREW: Compression with reversible embedded wavelets”, Proc. of Data Compression Conference, March 1995, pp. 212-221; F. Sheng et al., “Lossy and lossless image compression using reversible integer wavelet transforms”, Proc. I.E.E.E., 1998). However, the majority of the promising techniques have employed sophisticated DPCM and entropy coding techniques. These methods rely heavily on statistical modeling of the data (source) (M. Weinberger et al., “On universal context modeling for lossless compression of gray scale images”, I.E.E.E. Trans. on Image Processing, 1996). Although such approaches have given excellent compression performance, they are cumbersome to implement and often inefficient as software-programmable solutions on digital signal processors (DSPs) or general-purpose microprocessors. Efforts have been made to reduce the complexity of the statistical modeling portion of some of the best-performing coders, CALIC (X. Wu et al., “Context-based, adaptive, lossless image coding”, I.E.E.E. Trans. on Communications, vol. 45, 1997, pp. 437-444) and LOCO-I (M. Weinberger et al., “LOCO-I: A low complexity, context-based lossless image compression algorithm”, Proc. of 1996 Data Compression Conference, 1996, pp. 140-149). Even with such efforts, the computational complexity is daunting. One primary reason is that a context switch may occur at every pixel boundary, which introduces several data-dependent compute and control complexities in the encoder and the decoder.
What is needed is an image compression approach that reduces computational complexity but retains many of the attractive features of the most flexible compression approaches. Preferably, the approach should allow selective use of lossless compression and lossy compression for different portions of the same image, without substantially increasing the complexity present when only lossless or only lossy compression is applied to an image.
SUMMARY OF THE INVENTION
These needs are met by the invention, which provides a block-based coder that permits multiple levels of parallel implementation. The pixels in each input block are coded using a differential pulse code modulation (DPCM) scheme that uses one of several selectable predictors. The predictor for a block is chosen using local characteristics of the block to be coded. Prediction residuals (differences between actual and predicted values) are mapped to a non-negative integer scale and are coded using a new entropy coding mechanism based on a modified Golomb Code (MGC). In addition, a novel run-length encoding scheme encodes specific patterns of zero runs. The invention permits parallel processing of data blocks and allows flexibility in ordering the blocks to be processed.
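The residual mapping and coding steps can be illustrated with a short sketch. The snippet below folds signed residuals onto the non-negative integers and encodes them with a plain Golomb-Rice code; it is only a stand-in, since the details of the patent's modified Golomb Code and its zero-run encoding are not reproduced here, and the Rice parameter k is an arbitrary illustrative choice.

```python
# Minimal sketch: fold signed DPCM residuals onto non-negative integers,
# then encode with a plain Golomb-Rice code. This is a stand-in for the
# patent's modified Golomb Code (MGC); k = 2 is an arbitrary choice.

def map_residual(r: int) -> int:
    """Map 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ..."""
    return 2 * r if r >= 0 else -2 * r - 1

def golomb_rice_encode(value: int, k: int) -> str:
    """Unary-coded quotient, a '0' terminator, then k remainder bits."""
    code = "1" * (value >> k) + "0"
    if k:
        code += format(value & ((1 << k) - 1), f"0{k}b")
    return code

residuals = [0, -1, 3, -2]                      # signed prediction residuals
mapped = [map_residual(r) for r in residuals]   # -> [0, 1, 6, 3]
bitstream = "".join(golomb_rice_encode(v, k=2) for v in mapped)
print(bitstream)                                # -> 0000011010011
```

Because the mapping is a bijection onto the non-negative integers, small-magnitude residuals of either sign receive the shortest codewords, which suits the peaked residual distributions DPCM typically produces.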
A block of data values is examined to determine if the data values are all the same. A dc-only block uses a selected predictor and is easily compressed for later use. A non-dc-only block is examined according to selected criteria, and an optimal predictor is selected for this block. A residual value (actual value minus predicted value) is computed and clamped, and the block of clamped values and corresponding predictor index are processed for compression, using an efficient mapping that takes advantage of the full dynamic range of the clamped residual values.
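A rough sketch of this per-block flow follows. The dc-only test matches the description above, but the candidate predictor set, the minimum-sum-of-absolute-residuals selection rule, and the clamping range shown here are illustrative assumptions rather than the patent's actual choices.

```python
# Illustrative sketch of the per-block flow: detect dc-only blocks, otherwise
# pick the causal predictor with the smallest sum of absolute residuals and
# clamp the residuals. The predictor set and clamp range are assumptions,
# not the patent's actual choices.

PREDICTORS = [
    # index 0: previous pixel (128 seeds the first pixel)
    lambda b, i: b[i - 1] if i else 128,
    # index 1: mean of the two previous pixels
    lambda b, i: (b[i - 1] + b[i - 2]) // 2 if i >= 2 else (b[i - 1] if i else 128),
]

def clamp(v: int, lo: int = -255, hi: int = 255) -> int:
    return max(lo, min(hi, v))

def encode_block(block: list[int]):
    if all(p == block[0] for p in block):
        return ("dc-only", block[0])            # one value represents the block
    candidates = []
    for idx, predict in enumerate(PREDICTORS):
        res = [clamp(block[i] - predict(block, i)) for i in range(len(block))]
        candidates.append((sum(abs(r) for r in res), idx, res))
    cost, idx, res = min(candidates)            # optimal predictor for this block
    return ("predicted", idx, res)              # predictor index is side information

print(encode_block([7, 7, 7, 7]))       # ('dc-only', 7)
print(encode_block([10, 12, 11, 13]))   # ('predicted', 1, [-118, 2, 0, 2])
```

Both predictors are causal, so a decoder that receives the predictor index and the residuals can reconstruct the block exactly, preserving losslessness.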
Context modeling can be included without substantially increasing the computational complexity by making the context-switch granularity depend on a “block” of pixels (e.g., P×Q), rather than on a single pixel, allowing inclusion of a transition region where a switch occurs. In some imaging applications, lossless and lossy techniques are combined to compress a single image. For example, a portion of the image carrying mostly text information might have to be losslessly coded, while a portion with continuous-tone gray-scale information can be coded with some visual distortion to obtain higher compression. In such applications, the input image is segmented to identify the regions to be losslessly coded, and lossy and lossless coders are switched on and off region by region. However, many existing lossy and lossless coders operate on entire images, and the “chunking” produced by the segmentation algorithm makes it inefficient to code small blocks with those methods.
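The block-granular switch can be sketched as follows. The variance threshold used here to separate text-like from continuous-tone blocks is a hypothetical stand-in for whatever segmentation criterion drives the switch; the patent ties the switch to block boundaries but does not prescribe this classifier.

```python
# Sketch of block-granular mode switching. The P x Q block size and the
# variance threshold are illustrative; only the idea that context/mode
# switches happen on block boundaries comes from the description above.

P, Q = 8, 8  # switches occur only on P x Q block boundaries

def block_variance(block: list[int]) -> float:
    mean = sum(block) / len(block)
    return sum((p - mean) ** 2 for p in block) / len(block)

def choose_mode(block: list[int], threshold: float = 400.0) -> str:
    # High-activity blocks (text, graphics edges) are kept lossless;
    # smooth continuous-tone blocks may be coded with some distortion.
    return "lossless" if block_variance(block) > threshold else "lossy"

flat = [128] * (P * Q)               # smooth, continuous-tone-like block
edges = [0, 255] * (P * Q // 2)      # high-contrast, text-like block
print(choose_mode(flat), choose_mode(edges))   # lossy lossless
```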
The approach disclosed here is applicable to mixed mode images that may contain graphics, text, natural images, etc. The context switch at the block levels can be adapted for lossy coding. Thus, one obtains a single coder format that fits both lossy and lossless cases and encompasses an image segmenter as well.
REFERENCES:
patent: 6005622 (1999-12-01), Haskell et al.
patent: 6075470 (2000-06-01), Little et al.
patent: 6148109 (2000-11-01), Boon et al.
patent: 6215905 (2001-04-01), Lee et al.
patent: 6292588 (2001-09-01), Shen et al.
Inventors: Sriram Parthasarathy; Sudharsanan Subramania