Image processing method, image processing apparatus, and...

Pulse or digital communications – Bandwidth reduction or expansion


C382S232000, C382S236000, C382S238000

Reexamination Certificate

active

06259734

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to image processing methods, image processing apparatuses, and image processing media and, more particularly, to a method and an apparatus for performing motion compensation according to the operation load when subjecting an image signal to inter-frame predictive decoding or inter-frame predictive coding. The invention also relates to a data storage medium which contains a program implementing such image signal decoding or coding by software.
BACKGROUND OF THE INVENTION
In order to store or transmit digital image data efficiently, it is necessary to compressively encode the digital image data. A typical method for compressively coding digital image data is discrete cosine transform (DCT) coding, as represented by JPEG (Joint Photographic Experts Group) and MPEG (Moving Picture Experts Group). In addition, there are waveform coding methods such as sub-band coding, wavelet coding, and fractal coding.
Further, in order to eliminate redundant image data between adjacent frames (images), inter-frame predictive coding using motion compensation is carried out. To be specific, a pixel value (pixel data) of a pixel in the present frame is expressed by using a difference between this pixel value and a pixel value (pixel data) of a pixel in the previous frame, and this difference value (difference data) is subjected to waveform coding.
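As an illustration of this difference (residual) coding, the following is a minimal sketch in Python/NumPy. The function names (frame_difference, reconstruct) and the assumption of 8-bit pixel data are illustrative and not part of the method described above.

```python
import numpy as np

def frame_difference(current_frame: np.ndarray, previous_frame: np.ndarray) -> np.ndarray:
    """Prediction residual: each pixel value of the present frame minus the
    co-located pixel value of the previous (reference) frame."""
    # Use a signed type so that negative differences are preserved.
    return current_frame.astype(np.int16) - previous_frame.astype(np.int16)

def reconstruct(previous_frame: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Receiving end: add the (decoded) residual back onto the reference frame."""
    restored = previous_frame.astype(np.int16) + residual
    return np.clip(restored, 0, 255).astype(np.uint8)  # assumes 8-bit pixel data
```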
A brief description will be given of an image coding method and an image decoding method, based on MPEG-1 or the like, including DCT with motion compensation.
In the image coding method, initially, input image data corresponding to one frame to be coded (image space corresponding to one frame) is divided into image data corresponding to a plurality of macroblocks (image spaces each having the size of 16×16 pixels), and the image data are compressively coded macroblock by macroblock. To be specific, the image data corresponding to one macroblock is further divided into image data corresponding to four subblocks (image spaces each having the size of 8×8 pixels), and the image data are subjected to DCT and quantization, subblock by subblock, to generate quantized coefficients. This coding process is called “intra-frame coding”.
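The following sketch illustrates only the division into macroblocks and subblocks with a 2-D DCT and uniform quantization applied to each 8×8 subblock. It uses NumPy and SciPy; the function names and the single quantization scale q_scale are illustrative assumptions (an actual coder would use a quantization matrix and variable-length coding of the coefficients), and the frame dimensions are assumed to be multiples of 16.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT of an 8x8 subblock."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def intra_code_frame(frame: np.ndarray, q_scale: int = 16):
    """Divide a frame into 16x16 macroblocks and each macroblock into four
    8x8 subblocks, then DCT and quantize every subblock."""
    height, width = frame.shape                    # dimensions assumed multiples of 16
    coded_macroblocks = []
    for my in range(0, height, 16):                # macroblock rows
        for mx in range(0, width, 16):             # macroblock columns
            macroblock = frame[my:my + 16, mx:mx + 16].astype(np.float64)
            quantized_subblocks = []
            for sy in (0, 8):                      # four 8x8 subblocks
                for sx in (0, 8):
                    sub = macroblock[sy:sy + 8, sx:sx + 8]
                    coeff = dct2(sub)
                    quantized_subblocks.append(
                        np.round(coeff / q_scale).astype(np.int16))
            coded_macroblocks.append(quantized_subblocks)
    return coded_macroblocks
```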
At the receiving end, the quantized coefficients corresponding to the respective subblocks are subjected to inverse quantization and inverse DCT to reproduce image data corresponding to each macroblock.
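A matching sketch of the receiving end, again only illustrative: inverse quantization followed by an inverse 2-D DCT restores each 8×8 subblock. SciPy's idct is used here; q_scale is the same illustrative scalar as above.

```python
import numpy as np
from scipy.fftpack import idct

def idct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal inverse 2-D DCT of an 8x8 subblock."""
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def intra_decode_subblock(quantized: np.ndarray, q_scale: int = 16) -> np.ndarray:
    """Inverse quantization followed by inverse DCT for one 8x8 subblock."""
    dequantized = quantized.astype(np.float64) * q_scale   # inverse quantization
    pixels = idct2(dequantized)                             # inverse DCT
    return np.clip(np.round(pixels), 0, 255).astype(np.uint8)
```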
Meanwhile, there is an image data coding method called “inter-frame coding”. In this coding method, initially, from a frame (reference frame) which is temporally adjacent to a frame (target frame) including a target macroblock to be coded, an area comprising 16×16 pixels and having the smallest error in image data from the target macroblock is detected as a prediction macroblock, by a motion detecting method such as block matching. At this time, displacement data indicating the displacement of the prediction macroblock from the target macroblock is detected as a motion vector. Then, image data of the prediction macroblock is obtained from image data of a past frame (i.e., a frame which has already been coded) by motion compensation based on the detected motion vector.
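As an illustration of block matching, the sketch below performs an exhaustive search over a small window and selects the 16×16 area of the reference frame with the smallest sum of absolute differences (SAD) from the target macroblock. The ±7 pixel search range and the SAD criterion are illustrative assumptions; the description above only requires that the area with the smallest error be found.

```python
import numpy as np

def block_match(target_frame: np.ndarray, reference_frame: np.ndarray,
                top: int, left: int, block: int = 16, search: int = 7):
    """Return the motion vector (dy, dx) of the 16x16 reference-frame area
    with the smallest SAD from the target macroblock at (top, left)."""
    target = target_frame[top:top + block, left:left + block].astype(np.int32)
    height, width = reference_frame.shape
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > height or x + block > width:
                continue                      # candidate lies outside the frame
            candidate = reference_frame[y:y + block, x:x + block].astype(np.int32)
            sad = int(np.abs(target - candidate).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```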
Next, a difference in image data between the target macroblock and the prediction macroblock is obtained as difference data, and the difference data is subjected to DCT in units of 8×8 pixels to obtain DCT coefficients, and further, the DCT coefficients are quantized to obtain quantized coefficients.
Then, the quantized coefficients and the motion vector are transmitted or stored. This coding process is called “inter-frame coding”.
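Putting these steps together, a minimal sketch of inter-frame coding of one macroblock follows: the prediction error between the target macroblock and the motion-compensated prediction macroblock is transformed in 8×8 units and quantized. The dct2 helper from the intra-frame sketch is repeated so the block is self-contained; q_scale is again an illustrative scalar.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(block: np.ndarray) -> np.ndarray:
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def inter_code_macroblock(target_mb: np.ndarray, prediction_mb: np.ndarray,
                          q_scale: int = 16):
    """Quantized 8x8 DCT coefficients of the prediction error of one 16x16
    macroblock; these are transmitted together with the motion vector."""
    residual = target_mb.astype(np.float64) - prediction_mb.astype(np.float64)
    quantized_subblocks = []
    for sy in (0, 8):
        for sx in (0, 8):
            sub = residual[sy:sy + 8, sx:sx + 8]
            quantized_subblocks.append(np.round(dct2(sub) / q_scale).astype(np.int16))
    return quantized_subblocks
```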
The inter-frame coding has two prediction modes as follows: a prediction mode in which image data of a target macroblock included in a frame which is presently processed (present frame) is predicted only from image data of a previous frame which is previous to the present frame in the display order; and a prediction mode in which image data of a target macroblock is predicted from image data of two frames which are previous and subsequent to the present frame in the display order. The former is called “forward prediction mode” and the latter is called “bidirectional prediction mode”.
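The difference between the two prediction modes can be sketched as follows; in the bidirectional case the motion-compensated blocks from the previous and the subsequent frame are averaged (the interpolative case). This is only an illustration; mode selection and forward-only or backward-only prediction within a bidirectionally predicted frame are not shown.

```python
import numpy as np

def forward_prediction(prev_block: np.ndarray) -> np.ndarray:
    """Forward prediction mode: predict only from the previous frame."""
    return prev_block.astype(np.float64)

def bidirectional_prediction(prev_block: np.ndarray, next_block: np.ndarray) -> np.ndarray:
    """Bidirectional prediction mode (interpolative case): average of the
    motion-compensated blocks from the previous and the subsequent frame."""
    return (prev_block.astype(np.float64) + next_block.astype(np.float64)) / 2.0
```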
At the receiving end, the quantized coefficients are restored to the difference data in the space domain by inverse quantization and inverse DCT. Thereafter, image data of the prediction macroblock is obtained by motion compensation based on the motion vector, and the difference data and the image data of the prediction macroblock are added to reproduce image data of the target macroblock.
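A minimal sketch of this reconstruction is given below: the quantized coefficients are inverse quantized and inverse transformed to recover the difference data, which is then added to the motion-compensated prediction macroblock. The idct2 helper from the earlier sketch is repeated; the subblock ordering and q_scale follow the illustrative encoder sketch above.

```python
import numpy as np
from scipy.fftpack import idct

def idct2(block: np.ndarray) -> np.ndarray:
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def inter_decode_macroblock(quantized_subblocks, prediction_mb: np.ndarray,
                            q_scale: int = 16) -> np.ndarray:
    """Restore the difference data of one 16x16 macroblock and add it to the
    prediction macroblock obtained by motion compensation."""
    residual = np.zeros((16, 16))
    index = 0
    for sy in (0, 8):
        for sx in (0, 8):
            dequantized = quantized_subblocks[index].astype(np.float64) * q_scale
            residual[sy:sy + 8, sx:sx + 8] = idct2(dequantized)
            index += 1
    reconstructed = prediction_mb.astype(np.float64) + residual
    return np.clip(np.round(reconstructed), 0, 255).astype(np.uint8)
```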
In order to increase the prediction efficiency, in other words, in order to minimize the difference (prediction error) between the image data of the target macroblock and the image data of the prediction macroblock, the motion compensation, i.e., the process to obtain the image data of the prediction macroblock in accordance with the motion vector, is performed with precision of ½ pixel.
However, since the input image data is composed of pixel values (pixel data) in units of whole pixels, prediction data of ½ pixel precision must be generated by interpolation of pixel values between adjacent pixels within the reference frame. Further, when generating the prediction data of ½ pixel precision, the value of the motion vector has 0.5 pixel precision.
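A sketch of half-pixel motion compensation by bilinear interpolation follows. The integer part of the motion vector selects a patch of whole pixels that is one row and one column larger than the block, and the fractional part (0 or 0.5 per axis) weights the four neighbouring whole pixels. The function name is illustrative, and edge handling (motion vectors pointing outside the reference frame) is omitted for brevity.

```python
import numpy as np

def half_pel_prediction(reference: np.ndarray, top: int, left: int,
                        mv_y: float, mv_x: float, block: int = 16) -> np.ndarray:
    """Prediction block for a motion vector with 0.5-pixel precision,
    generated by bilinear interpolation between adjacent whole pixels."""
    iy, ix = int(np.floor(mv_y)), int(np.floor(mv_x))   # integer parts
    fy, fx = mv_y - iy, mv_x - ix                       # fractional parts (0 or 0.5)
    # One extra row and column of whole pixels is needed for interpolation.
    patch = reference[top + iy: top + iy + block + 1,
                      left + ix: left + ix + block + 1].astype(np.float64)
    a = patch[:block, :block]             # top-left neighbours
    b = patch[:block, 1:block + 1]        # top-right neighbours
    c = patch[1:block + 1, :block]        # bottom-left neighbours
    d = patch[1:block + 1, 1:block + 1]   # bottom-right neighbours
    prediction = ((1 - fy) * (1 - fx) * a + (1 - fy) * fx * b
                  + fy * (1 - fx) * c + fy * fx * d)
    return np.round(prediction)
```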
Although it is assumed that the quantization, DCT and the like are performed in units of 8×8 pixels in the above description, the processing unit is not restricted to 8×8 pixels. For example, those processes may be performed in units of 7×1 pixels. Hence, generally, the quantization, DCT, and the like can be performed in units of g×h pixels (g,h=positive integers). Further, although the macroblock comprises 16×16 pixels in the above description, the macroblock may comprise M×N pixels (M,N=positive integers), generally.
However, in the following description, for simplification, both the macroblock and the subblock are regarded as image spaces each comprising K×K pixels (K=positive integer). That is, it is premised that the coding, decoding, quantization, inverse quantization, DCT, and inverse DCT are performed in units of K×K pixels. Therefore, hereinafter a macroblock is simply referred to as “a block”.
FIG. 17 is a flowchart for explaining process steps in the conventional image decoding method including motion compensation.
First of all, coded image data which has been obtained by compressively coding image data by the above-mentioned coding method and then variable-length coding the compressed data, is input block by block (step S71).
Next, the coded image data corresponding to a target block is analyzed to be separated into quantized DCT coefficients (quantized coefficients), quantization scale, and motion vector, and these are respectively converted from variable-length codes to corresponding numerical values to be output (step S72).
Thereafter, the quantized coefficients are subjected to inverse quantization and inverse DCT in units of K×K pixels, and difference data in a space domain corresponding to the target block and comprising K×K values (pixel data) are output (step S73).
Next, prediction data for the target block is generated from image data of the reference frame by motion compensation. When generating prediction data of ½ pixel precision, more than K×K reference pixel values are obtained from the reference frame.
That is, in the conventional decoding method, prediction data having ½ pixel precision in both the horizontal and vertical directions is generated as follows. Initially, K′×K′ pixels are obtained from the position of a pixel specified according to the integer parts of the values of the motion vector in the reference frame (step S74), and the K′×K′ pixel values so obtained are subjected to interpolation, such as bilinear interpolation, to generate prediction data.
