Method for preventing dual-step half-pixel motion...

Image analysis – Image compression or coding – Including details of decompression

Reexamination Certificate


Details

Classes: C382S236000, C382S238000, C382S243000, C382S244000, C382S248000, C382S251000, C375S240200, C375S240160, C375S240190

Type: Reexamination Certificate

Status: active

Patent number: 06567557

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates to computer-based digital data compression, and more specifically to preventing rounding errors that can accumulate in MPEG-2-type decompression.
2. Description of the Prior Art
Digitized images require a large amount of storage space and a large amount of bandwidth to transmit. A single, relatively modest-sized image, having 480 by 640 pixels and a full-color resolution of twenty-four bits per pixel (three 8-bit bytes per pixel), occupies nearly a megabyte of data. At a resolution of 1024 by 768 pixels, a 24-bit color screen requires 2.3 megabytes of memory to represent. A 24-bit color picture of an 8.5 inch by 11 inch page, at 300 dots per inch, requires as much as twenty-five megabytes to represent.
Video images are even more data-intensive, since it is generally accepted that for high-quality consumer applications, images must occur at a rate of at least thirty frames per second. Current proposals for high-definition television (HDTV) call for as many as 1920 by 1035 or more pixels per frame, which translates to a data transmission rate of about 1.5 billion bits per second. This bandwidth requirement can be reduced somewhat if one uses 2:1 interleaving and 4:1 decimation for the “U” and “V” chrominance components, but 0.373 billion bits per second are still required.
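The raw figures above follow directly from the pixel counts. A minimal C sketch of the arithmetic (the constants are taken from the text; everything else is illustrative):

#include <stdio.h>

int main(void)
{
    /* 24-bit color: 3 bytes per pixel */
    double vga_bytes  = 480.0 * 640 * 3;              /* ~0.9 megabyte  */
    double xga_bytes  = 1024.0 * 768 * 3;             /* ~2.3 megabytes */
    double page_bytes = (8.5 * 300) * (11 * 300) * 3; /* ~25 megabytes  */

    /* Raw HDTV rate: 1920 x 1035 pixels, 24 bits per pixel, 30 frames per second */
    double hdtv_bps = 1920.0 * 1035 * 24 * 30;        /* ~1.4e9 bits/s  */

    printf("640x480x24-bit image    : %.2f MB\n", vga_bytes / 1e6);
    printf("1024x768x24-bit image   : %.2f MB\n", xga_bytes / 1e6);
    printf("8.5x11in page at 300 dpi: %.2f MB\n", page_bytes / 1e6);
    printf("Uncompressed HDTV       : %.2f Gbit/s\n", hdtv_bps / 1e9);
    return 0;
}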
Traditional lossless techniques for compressing digital image and video information, such as Huffman encoding, run length encoding and the Lempel-Ziv-Welch algorithm, are far from adequate to meet this demand. For this reason, compression techniques which can involve some loss of information have been devised, including discrete cosine transform techniques, adaptive discrete cosine transform techniques, and wavelet transform techniques. Wavelet techniques are discussed in DeVore, Jawerth and Lucier, "Image Compression Through Wavelet Transform Coding," IEEE Transactions on Information Theory, Vol. 38, No. 2, pp. 719-746 (1992); and in Antonini, Barlaud, Mathieu and Daubechies, "Image Coding Using Wavelet Transform," IEEE Transactions on Image Processing, Vol. 1, No. 2, pp. 205-220 (1992).
The Joint Photographic Experts Group (JPEG) has promulgated a standard for still image compression, known as the JPEG standard, which involves a discrete cosine transform-based algorithm. The JPEG standard is described in a number of publications, including the following incorporated by reference herein: Wallace, "The JPEG Still Picture Compression Standard," IEEE Transactions on Consumer Electronics, Vol. 38, No. 1, pp. xviii-xxxiv (1992); Purcell, "The C-Cube CL550 JPEG Image Compression Processor," C-Cube Microsystems, Inc. (1992); and C-Cube Microsystems, "JPEG Algorithm Overview" (1992).
An encoder using the JPEG algorithm has four steps: linear transformation, quantization, run-length encoding (RLE), and Huffman coding. The decoder reverses these steps to reconstitute the image. For the linear transformation step, the image is divided into 8*8 pixel blocks and a Discrete Cosine Transform is applied in both spatial dimensions to each block. Dividing the image into blocks overcomes a deficiency of the discrete cosine transform, namely that it is seriously non-local, by confining the transform to small regions and computing a separate transform for each block. However, this compromise has the disadvantage of producing a tiled appearance (blockiness) at high compression.
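For concreteness, the following is a minimal sketch of the forward 8*8 two-dimensional DCT used in the linear transformation step; the normalization follows the usual JPEG convention, and the function name is illustrative rather than taken from the standard.

#include <math.h>

#define N 8
static const double PI = 3.14159265358979323846;

/* Forward 2-D DCT on one 8*8 block.  'in' holds the (level-shifted) pixel
   samples; 'out' receives the transform coefficients. */
void dct_8x8(const double in[N][N], double out[N][N])
{
    for (int u = 0; u < N; u++) {
        for (int v = 0; v < N; v++) {
            double cu = (u == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double cv = (v == 0) ? 1.0 / sqrt(2.0) : 1.0;
            double sum = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += in[x][y]
                         * cos((2 * x + 1) * u * PI / (2.0 * N))
                         * cos((2 * y + 1) * v * PI / (2.0 * N));
            out[u][v] = 0.25 * cu * cv * sum;
        }
    }
}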
The quantization step is essential to reduce the amount of information to be transmitted, though it does cause loss of image information. Each transform component is quantized using a value selected according to its position in the 8*8 block. This step has the convenient side effect of reducing the abundant small values to zero or other small numbers, which require much less information to specify.
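A minimal sketch of that step, assuming a caller-supplied quantization table (names and values are illustrative, not the JPEG defaults): each coefficient is divided by the table entry for its position and rounded, so most small coefficients collapse to zero.

#include <math.h>

#define N 8

/* Quantize one block of transform coefficients: divide each value by the
   table entry for its position and round to the nearest integer. */
void quantize_8x8(const double coef[N][N], const int qtable[N][N], int out[N][N])
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++)
            out[u][v] = (int)lround(coef[u][v] / qtable[u][v]);
}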
The run-length encoding step codes runs of the same value, such as zeros, as items identifying the value to repeat and the number of times to repeat it. A single item such as “eight zeros” requires less space to represent than a string of eight zeros, for example. This step is justified by the abundance of zeros that usually results from the quantization step.
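As a sketch of the idea, the following is a generic (count, value) run-length encoder; it is illustrative only and not the exact zero-run format that the JPEG standard specifies.

#include <stddef.h>

/* Encode 'n' input values as (count, value) pairs; returns the number of
   pairs written.  A run such as eight zeros becomes the single pair (8, 0). */
size_t rle_encode(const int *in, size_t n, int *counts, int *values)
{
    size_t pairs = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && in[i + run] == in[i])
            run++;
        counts[pairs] = (int)run;
        values[pairs] = in[i];
        pairs++;
        i += run;
    }
    return pairs;
}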
Huffman coding translates each symbol from the run-length encoding step into a variable-length bit string that is chosen depending on how frequently the symbol occurs; that is, frequent symbols are coded with shorter codes than infrequent symbols. The coding can be done either from a preset table or from a table composed specifically for the image to minimize the total number of bits needed.
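As a hedged illustration of the table-driven case (the symbols and bit strings below are made up and are not an actual JPEG Huffman table), the step amounts to looking up each symbol's bit string and concatenating:

#include <string.h>

/* Toy prefix-code table: more frequent symbols get shorter codes. */
struct vlc { int symbol; const char *bits; };

static const struct vlc table[] = {
    { 0, "0"    },   /* most frequent symbol: 1 bit */
    { 1, "10"   },
    { 2, "110"  },
    { 3, "1110" },   /* least frequent: 4 bits */
};

/* Append the code for each symbol to 'out' as a string of '0'/'1' characters.
   The caller must size 'out' generously. */
void vlc_encode(const int *symbols, int n, char *out)
{
    out[0] = '\0';
    for (int i = 0; i < n; i++)
        for (size_t j = 0; j < sizeof table / sizeof table[0]; j++)
            if (table[j].symbol == symbols[i]) {
                strcat(out, table[j].bits);
                break;
            }
}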
Similarly to JPEG, the Motion Pictures Experts Group (MPEG) has promulgated two standards for coding image sequences, known as MPEG-1 and MPEG-2. The MPEG algorithms exploit the fact that successive frames typically differ only slightly. In the MPEG standards, a full image is compressed and transmitted only once for every twelve frames. The JPEG standard is typically used to compress these “reference” or “intra” frames. For the intermediate frames, a predicted frame is calculated and only the difference between the actual frame and the predicted frame is compressed and transmitted. Any of several algorithms can be used to calculate a predicted frame, and the algorithm is chosen on a block-by-block basis depending on which predictor algorithm works best for the particular block. Motion detection can be used in some of the predictor algorithms. MPEG-1 is described in detail in International Standards Organization (ISO) CD 11172.
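A minimal sketch of the difference-coding idea for the intermediate frames, assuming a 16*16 macroblock and illustrative function names (this is not the standards' own syntax): the encoder sends only the residual between the actual block and the predicted block, and the decoder adds it back to its own prediction.

#define BLK 16   /* macroblock dimension */

/* Encoder side: residual = actual - predicted. */
void residual(const unsigned char actual[BLK][BLK],
              const unsigned char pred[BLK][BLK],
              int diff[BLK][BLK])
{
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++)
            diff[y][x] = (int)actual[y][x] - (int)pred[y][x];
}

/* Decoder side: reconstructed = predicted + residual, clipped to 8 bits. */
void reconstruct(const unsigned char pred[BLK][BLK],
                 const int diff[BLK][BLK],
                 unsigned char out[BLK][BLK])
{
    for (int y = 0; y < BLK; y++)
        for (int x = 0; x < BLK; x++) {
            int v = (int)pred[y][x] + diff[y][x];
            out[y][x] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
        }
}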
Accordingly, for compression of video sequences, the MPEG technique treats the compression of reference frames substantially independently from the compression of the intermediate frames between them. The present invention relates primarily to the compression of still images and reference frames for video information, although aspects of the invention can be used to accomplish video compression even without treating reference frames and intermediate frames independently.
The above techniques for compressing digitized images represent only a few of the techniques that have been devised. However, none of the known techniques yet achieves compression ratios sufficient to support the huge still and video data storage and transmission requirements expected in the near future. The techniques also raise additional problems apart from pure compression ratio issues. In particular, for real-time, high-quality video image decompression, the decompression algorithm must be simple enough to produce thirty frames of decompressed images per second. The speed requirement for compression is often not as extreme as for decompression, since for many purposes images can be compressed in advance. Even then, however, compression time must be reasonable to achieve commercial objectives. In addition, many applications, such as real-time transmission of live events, require real-time compression as well as decompression. Known image compression and decompression techniques that achieve high compression ratios often do so only at the expense of extensive computation during compression, decompression, or both.
The MPEG-2 video compression standard is defined in ISO/IEC 13818-2, “Information technology—Generic coding of moving pictures and associated audio information: Video”. MPEG-2 uses motion compensation on fixed-size rectangular blocks of pixel elements (“macroblocks”) to exploit temporal locality for improved compression efficiency. The location of these macroblocks in the reference pictures is given on half-pixel boundaries, and so requires interpolation of pixel elements. Such interpolation is specified in the MPEG-2 standard, as follows:
case-A:
if ((!half_flag[0])&&(!half_flag[1]))
pel_pred[y][x]=pel_ref[y+int_vec[1]][x+int_vec[0]];
case-B:
if ((!half_flag[0])&&half_flag[1])
pel_pred[y][x]=(pel_ref[y+int_vec[1]][x+int_vec[0]]+pel_ref[y+int_vec[1]+1][x+int_vec[0]])//2;
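The listing stops after case-B. In ISO/IEC 13818-2 there are two further cases: averaging two horizontally adjacent reference samples, and averaging the four surrounding samples when the motion vector is half-pel in both dimensions; “//” denotes integer division that rounds to the nearest integer, with halves rounded away from zero. The following self-contained C sketch covers all four cases; the function and helper names, the 8-bit sample type, and the simple row stride are illustrative assumptions, not the standard's own code.

#include <stdint.h>

/* "//" rounding for the non-negative sums used here: add half the divisor
   before dividing, so exact halves round up. */
static int div_round(int sum, int d)
{
    return (sum + d / 2) / d;
}

/* Half-pixel prediction for one sample.  pel_ref points to the reference
   picture, stride is its line width; int_vec[] and half_flag[] hold the
   integer and half-pel parts of the motion vector (index 0 horizontal,
   index 1 vertical). */
int pel_predict(const uint8_t *pel_ref, int stride, int x, int y,
                const int int_vec[2], const int half_flag[2])
{
    const uint8_t *p = pel_ref + (y + int_vec[1]) * stride + (x + int_vec[0]);

    if (!half_flag[0] && !half_flag[1])      /* case-A: no interpolation */
        return p[0];
    if (!half_flag[0] && half_flag[1])       /* case-B: vertical average */
        return div_round(p[0] + p[stride], 2);
    if (half_flag[0] && !half_flag[1])       /* horizontal half-pel average */
        return div_round(p[0] + p[1], 2);
    /* half-pel in both dimensions: four-sample average */
    return div_round(p[0] + p[1] + p[stride] + p[stride + 1], 4);
}

Because each of these averages rounds upward on ties, predictions formed from already-interpolated pictures tend to drift slightly over long prediction chains; this is the kind of accumulated rounding error the present invention is directed at preventing.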
