Fast JPEG huffman encoding and decoding

Coded data generation or conversion – Digital code to digital code converters – To or from number of pulses

Reexamination Certificate

Details

C341S051000, C341S061000, C341S106000, C341S067000, C341S079000, C341S108000, C358S426010, C358S438000, C382S246000, C382S232000, C382S199000

Reexamination Certificate

active

06373412

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention generally relates to image compression for diverse applications and, more particularly, to performing Huffman entropy encoding and decoding in accordance with the JPEG (Joint Photographic Experts Group) standard in combination with a structure for storing Discrete Cosine Transform (DCT) blocks in a packed format.
2. Description of the Prior Art
Pictorial and graphics images contain extremely large amounts of data and, if digitized to allow transmission or processing by digital data processors, often require many millions of bytes to represent the pixels of the image or graphic with good fidelity. The purpose of image compression is to represent images with less data in order to save storage costs or transmission time and costs. The most effective compression is achieved by approximating the original image rather than reproducing it exactly. The JPEG standard, discussed in detail in “JPEG Still Image Data Compression Standard” by Pennebaker and Mitchell, published by Van Nostrand Reinhold, 1993, which is hereby fully incorporated by reference, allows the interchange of images between diverse applications and opens up the capability to provide digital continuous-tone color images in multi-media applications.
JPEG is primarily concerned with images that have two spatial dimensions, contain gray scale or color information, and possess no temporal dependence, as distinguished from the MPEG (Moving Picture Experts Group) standard. JPEG compression can reduce the storage requirements by more than an order of magnitude and improve system response time in the process. A primary goal of the JPEG standard is to provide the maximum image fidelity for a given volume of data and/or available transmission or processing time, and any arbitrary degree of data compression is accommodated. It is often the case that data compression by a factor of twenty or more (and reduction of transmission or processing time by a comparable factor) will not produce artifacts which are noticeable to the average viewer.
One of the basic building blocks for JPEG is the Discrete Cosine Transform (DCT). An important aspect of this transform is that it produces uncorrelated coefficients. Decorrelation of the coefficients is very important for compression because each coefficient can be treated independently without loss of compression efficiency. Another important aspect of the DCT is the ability to quantize the DCT coefficients using visually-weighted quantization values. Since the human visual system response is very dependent on spatial frequency, by decomposing an image into a set of waveforms, each with a particular spatial frequency, it is possible to separate the image structure the eye can see from the image structure that is imperceptible. The DCT thus provides a good approximation to this decomposition to allow truncation or omission of data which does not contribute significantly to the viewer's perception of the fidelity of the image.
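The forward transform itself can be written compactly. The following is a minimal, unoptimized Python sketch of the 8×8 forward DCT described above; the function name is illustrative, and a practical encoder would use a fast factored implementation rather than this direct quadruple loop.

    import math

    def forward_dct_8x8(block):
        # Direct 8x8 forward DCT-II of one block of level-shifted samples.
        def c(k):
            return 1.0 / math.sqrt(2.0) if k == 0 else 1.0
        coeffs = [[0.0] * 8 for _ in range(8)]
        for u in range(8):
            for v in range(8):
                s = 0.0
                for x in range(8):
                    for y in range(8):
                        s += (block[x][y]
                              * math.cos((2 * x + 1) * u * math.pi / 16)
                              * math.cos((2 * y + 1) * v * math.pi / 16))
                coeffs[u][v] = 0.25 * c(u) * c(v) * s
        return coeffs

    # A constant block has all of its energy in the DC coefficient (8 times the
    # sample value) and essentially zero in every AC coefficient.
    flat = [[16] * 8 for _ in range(8)]
    out = forward_dct_8x8(flat)
    print(round(out[0][0], 3), round(out[0][1], 3))   # approximately 128.0 and 0.0
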
In accordance with the JPEG standard, the original monochrome image is first decomposed into blocks of sixty-four pixels in an 8×8 array at an arbitrary resolution which is presumably sufficiently high that visible aliasing is not produced. (Color images are compressed by first decomposing each component into 8×8 pixel blocks separately.) Techniques and hardware are known which can perform a DCT on this quantized image data very rapidly, yielding sixty-four DCT coefficients. For many images, many of these DCT coefficients will be zero (and thus do not contribute to the image in any case) or near zero, and they can be neglected or omitted when they correspond to spatial frequencies to which the eye is relatively insensitive. Since the human eye is less sensitive to very high and very low spatial frequencies, the JPEG standard provides for ordering the DCT coefficients in a so-called zig-zag pattern which approximately corresponds to an increasing sum of spatial frequencies in the horizontal and vertical directions; this tends to group the DCT coefficients corresponding to less important spatial frequencies at the ends of the DCT coefficient data stream, allowing them to be compressed efficiently as a group in many instances.
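The zig-zag order mentioned above can be generated from the anti-diagonal pattern rather than stored as a table. The short Python sketch below derives the visiting order for an 8×8 block; the helper names are illustrative.

    def zigzag_order(n=8):
        # (row, col) visiting order of the zig-zag scan: anti-diagonals of increasing
        # index, alternating direction so the path snakes through the block.
        return sorted(((r, c) for r in range(n) for c in range(n)),
                      key=lambda rc: (rc[0] + rc[1],
                                      rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

    def zigzag_flatten(block):
        # Serialize a block so low-frequency coefficients come first and the
        # runs of zero-valued high-frequency coefficients cluster at the end.
        return [block[r][c] for r, c in zigzag_order(len(block))]

    # Row-major indices of the first few positions: 0, 1, 8, 16, 9, 2, 3, 10, 17, 24
    print([r * 8 + c for r, c in zigzag_order()][:10])
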
While the above-described discrete cosine transformation and coding may provide significant data compression for a majority of images encountered in practice, actual reduction in data volume is not guaranteed and the degree of compression is not optimal, particularly since equal precision for representation of each DCT coefficient would require the same number of bits to be transmitted (although the JPEG standard allows for the DCT values to be quantized by ranges that are coded in a table). That is, the gain in compression developed by DCT coding derives largely from increased efficiency in handling zero and near-zero values of the DCT coefficients although some compression is also achieved through quantization that reduces precision. Accordingly, the JPEG standard provides a second stage of compression and coding which is known as entropy coding.
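As a rough illustration of the quantization mentioned above, the sketch below divides each DCT coefficient by a step size from a table and rounds, which is where near-zero coefficients collapse to exactly zero; the uniform step size used here is a placeholder, not a table from the standard.

    def quantize(coeffs, qtable):
        # Divide each coefficient by its table entry and round to the nearest integer.
        return [[int(round(coeffs[r][c] / qtable[r][c])) for c in range(8)]
                for r in range(8)]

    def dequantize(quantized, qtable):
        # Approximate reconstruction: multiply back by the step sizes (precision is lost).
        return [[quantized[r][c] * qtable[r][c] for c in range(8)] for r in range(8)]

    # Placeholder table: a uniform step size of 16. A coefficient of 7 quantizes to 0
    # and is discarded; a coefficient of 40 survives as 2 and reconstructs as 32.
    qtable = [[16] * 8 for _ in range(8)]
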
The concept of entropy coding generally parallels the concept of entropy in the more familiar context of thermodynamics, where entropy quantifies the amount of “disorder” in a physical system. In the field of information theory, entropy is a measure of the predictability of the content of any given quantum of information (e.g. a symbol) in the environment of a collection of data of arbitrary size, independent of the meaning of any given quantum of information or symbol. This concept provides an achievable lower bound for the amount of compression that can be achieved for a given alphabet of symbols and, more fundamentally, leads to an approach to compression premised on the observation that relatively more predictable data or symbols contain less information than less predictable data or symbols, and conversely. Thus, assuming a suitable code for the purpose, optimally efficient compression can be achieved by allocating fewer bits to more predictable symbols or values (which are more common in the body of data and carry less information) while reserving longer codes for relatively rare symbols or values.
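The lower bound referred to above is the Shannon entropy of the symbol distribution. The small Python example below computes it for a made-up symbol stream to show why a highly predictable stream needs far fewer bits per symbol than a uniform one.

    import math
    from collections import Counter

    def entropy_bits_per_symbol(symbols):
        # Shannon entropy H = -sum(p * log2(p)): the minimum average bits per symbol
        # achievable by any code for this distribution.
        counts = Counter(symbols)
        total = sum(counts.values())
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # A skewed stream (mostly zeros, as with high-frequency DCT coefficients) has low
    # entropy, so it can be coded in well under one bit per symbol on average.
    print(entropy_bits_per_symbol([0] * 90 + [1] * 5 + [2] * 5))   # about 0.57
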
As a practical matter, Huffman coding and arithmetic coding are suitable for entropy encoding and both are accommodated by the JPEG standard. One operational difference for purposes of the JPEG standard is that, while tables of values corresponding to the codes are required for both coding techniques, default tables are provided for arithmetic coding but not for Huffman coding. However, although Huffman tables can be freely specified under the JPEG standard to obtain maximal coding efficiency and image fidelity upon reconstruction, some particular Huffman tables are often used indiscriminately, much in the nature of a default, in order to avoid the computational overhead of computing custom Huffman tables, provided the image fidelity is not excessively compromised.
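For illustration, the Python sketch below expands a Huffman table given in the form the JPEG standard uses to carry one (a count of codes of each length, plus the symbols in order of increasing code length) into actual code words; the function name and the tiny example table are invented for this sketch.

    def codes_from_lengths(bits, huffval):
        # bits[i]  : number of codes that are i+1 bits long (i = 0..15)
        # huffval  : the symbols, listed in order of increasing code length
        sizes = []
        for length in range(1, 17):
            sizes.extend([length] * bits[length - 1])
        codes, code, prev_size = {}, 0, sizes[0] if sizes else 0
        for symbol, size in zip(huffval, sizes):
            code <<= size - prev_size      # canonical assignment: continue counting upward
            codes[symbol] = (code, size)   # (code value, code length in bits)
            code += 1
            prev_size = size
        return codes

    # One 1-bit code and two 2-bit codes: 'a' -> 0, 'b' -> 10, 'c' -> 11
    print(codes_from_lengths([1, 2] + [0] * 14, ['a', 'b', 'c']))
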
It should be appreciated that while entropy coding, particularly using Huffman coding, guarantees a very substantial degree of data compression if the coding or conditioning tables are reasonably well-suited to the image, the encoding itself is very computationally intensive since it is statistically based and requires collection of statistical information regarding a large number of image values or values representing them, such as DCT coefficients. Conversely, the use of tables embodying probabilities which do not represent the image being encoded could lead to expansion rather than compression if the image being encoded requires coding of many values which are relatively rare in the image from which the tables were developed, although such a circumstance is seldom encountered.
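The statistics-gathering pass described above amounts to counting how often each symbol to be coded occurs before any custom table can be built. The sketch below is a deliberately simplified illustration that counts quantized coefficient values directly; a JPEG encoder would instead count the DC size categories and AC run/size symbols it is actually going to code.

    from collections import Counter

    def gather_symbol_counts(blocks):
        # One full pass over the quantized coefficient blocks just to build the
        # histogram from which a custom Huffman table would be derived.
        counts = Counter()
        for block in blocks:
            for row in block:
                counts.update(row)
        return counts
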
It is for this reason that some Huffman tables have effectively come into standard usage even though optimal compression and/or optimal fidelity for the degree of compression utilized will not be achieved.

