Object-based quad-tree mesh motion compensation method using...

Image analysis – Image compression or coding – Interframe coding

Reexamination Certificate


Details

C382S240000, C348S407100, C348S420100

Status: active

Patent number: 06757433

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a motion compensation method, and more particularly, to an object-based quad-tree mesh motion compensation method using a Greedy algorithm.
2. Description of the Related Art
A motion compensation technique, which reduces the amount of data by exploiting the temporal redundancy of a moving picture, is essential for moving picture encoding. To achieve motion compensation that accurately represents complicated or partial motion, a conventional hierarchical grid interpolation (HGI) technique defines a quad-tree mesh structure for frame-unit moving picture encoding.
FIG. 1 is a block diagram schematically illustrating an HGI technique. First, quad-tree block segmentation is performed on a current image frame (I_t) on the basis of the variance of frame difference (VFD) of each block in the frame, in a block 100. To be more specific, the current image frame is first segmented into square blocks of predetermined sizes. If the VFD of a block is greater than the reference value, the block is again divided into four square blocks of the same size. This process is repeated until the VFDs of all of the divided blocks are smaller than the reference value. Here, the VFD denotes the variance of the frame difference between the current image frame (I_t) and the previous image frame (I_t−1).
Next, the motions of vertices are estimated from the quad-tree block segmentation result S_t so as to minimize motion compensation errors within blocks, and quad-tree mesh motion compensation is performed on the image signals within each block by interpolation using the estimated motion, as in element 102. As a consequence, a motion-compensated image (Î_t) and a motion vector M_t corresponding to the current image frame (I_t) are obtained.
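The recursive VFD test described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frames are nested lists of pixel intensities, and the threshold and block-size values are made up.

```python
# Sketch of VFD-driven quad-tree segmentation (illustrative; names are ours).

def vfd(cur, prev, x, y, size):
    """Variance of the frame difference over one square block."""
    diffs = [cur[y + j][x + i] - prev[y + j][x + i]
             for j in range(size) for i in range(size)]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)

def segment(cur, prev, x, y, size, threshold, min_size, out):
    """Recursively split a block while its VFD exceeds the threshold."""
    if size > min_size and vfd(cur, prev, x, y, size) > threshold:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                segment(cur, prev, x + dx, y + dy, half,
                        threshold, min_size, out)
    else:
        out.append((x, y, size))  # terminal block of the quad-tree

# Example: an 8x8 frame whose top-left quadrant changed between frames.
prev = [[0] * 8 for _ in range(8)]
cur = [[(10 if x < 4 and y < 4 and (x + y) % 2 else 0) for x in range(8)]
       for y in range(8)]
blocks = []
segment(cur, prev, 0, 0, 8, threshold=1.0, min_size=2, out=blocks)
print(blocks)  # the changed quadrant is split down to 2x2 blocks
```

Only the quadrant with a large frame difference is subdivided; the static quadrants remain single terminal blocks.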
FIGS. 2(a) and 2(b) show the quad-tree blocks of a previous frame matched with the current frame by a quad-tree mesh structure. Each block can be transformed, so that complicated motions can be accurately compensated. Here, bound vertices (for example, 1, 2, . . . , 10 in FIG. 2(a)) are defined to keep the shape of each block rectangular, and their motion vectors (for example, MV1 through MV3 in FIG. 2(b)) are obtained by linear interpolation using the motion vectors of two adjacent vertices. On the other hand, motion estimation is performed on control points (for example, o, p, . . . , y in FIG. 2(a)), whose motion vectors must be estimated, so as to minimize the image frame motion compensation error. The motion vector of each pixel within a block is obtained by linear interpolation using the motion vectors of the estimated vertices, thereby compensating for the motion within the block.
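The per-pixel interpolation described above can be sketched as a bilinear blend of a block's four corner motion vectors. This is a toy illustration under our own assumptions; it does not model the patent's distinction between bound vertices and control points, and the corner vectors are invented.

```python
# Bilinear interpolation of a pixel's motion vector from four corner
# vectors (illustrative sketch; the corner values below are made up).

def pixel_motion(mv00, mv10, mv01, mv11, u, v):
    """Blend corner motion vectors; (u, v) is the pixel's normalized
    position in [0, 1] x [0, 1] within the block."""
    def lerp(a, b, t):
        return a + (b - a) * t
    top = (lerp(mv00[0], mv10[0], u), lerp(mv00[1], mv10[1], u))
    bot = (lerp(mv01[0], mv11[0], u), lerp(mv01[1], mv11[1], u))
    return (lerp(top[0], bot[0], v), lerp(top[1], bot[1], v))

# Corners: top-left, top-right, bottom-left, bottom-right.
mv = pixel_motion((0, 0), (4, 0), (0, 2), (4, 2), u=0.5, v=0.5)
print(mv)  # the block center gets the average of the four corners
```

Because the interpolated field varies smoothly across the block, neighboring blocks that share vertices deform continuously, which is what lets the mesh represent non-translational motion.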
Segmented quad-tree information is encoded and transmitted as described below. While each node in the quad-tree, starting from the root node, is visited in breadth-first order, each node is encoded as ‘0’ or ‘1’ depending on whether it is a terminal node. However, a minimum-sized block does not need to be encoded with ‘0’, since it is always terminal. The motion vector of each control point is fixed-length encoded.
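The breadth-first encoding just described can be sketched as follows. The tree representation is our own assumption: an internal node is a list of four children, and anything else is a leaf.

```python
# Breadth-first quad-tree encoding sketch: '1' for an internal node,
# '0' for a terminal node; minimum-size blocks emit no bit, since they
# are always terminal. (Tree/bit conventions here are our assumptions.)
from collections import deque

def encode(root, size, min_size):
    bits = []
    queue = deque([(root, size)])
    while queue:
        node, s = queue.popleft()
        if s == min_size:
            continue  # always terminal: its '0' need not be transmitted
        if isinstance(node, list):  # internal node with four children
            bits.append('1')
            queue.extend((child, s // 2) for child in node)
        else:
            bits.append('0')  # terminal node above minimum size
    return ''.join(bits)

# A 16x16 root split once, with its first child split again (min size 4).
tree = [['L', 'L', 'L', 'L'], 'L', 'L', 'L']
print(encode(tree, size=16, min_size=4))  # -> "11000"
```

The breadth-first order means the decoder can rebuild the same tree level by level from the bit stream alone, given the root block size and the minimum block size.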
A conventional motion compensation technique is designed for frame-based moving picture encoding and is therefore not suitable for object-based moving picture encoding. Block segmentation in a quad-tree structure reduces motion compensation error, but the transmission rate increases because segmentation adds quad-tree information and motion vector information. Hence, deciding which part of an image to segment further directly affects the transmission rate-distortion performance, which is a measure of how effectively a given transmission rate is used. In the prior art, however, quad-tree block segmentation and motion estimation are performed separately, so the transmission rate-distortion performance is not directly considered.
SUMMARY OF THE INVENTION
An objective of the present invention is to provide a motion compensation method which defines an object-based quad-tree mesh structure capable of accurately compensating for complicated and partial motion.
Another objective of the present invention is to provide a block segmentation method in an object-based quad-tree mesh motion compensation method using the Greedy algorithm, by which the transmission rate-distortion performance of an object-based quad-tree mesh structure is improved.
To achieve the first objective, the present invention provides an object-based quad-tree mesh motion compensation method using the Greedy algorithm, the method including: (a) defining an object-based quad-tree mesh; (b) segmenting each block in an image frame, which is segmented into blocks of predetermined sizes, in order to form the object-based quad-tree mesh of step (a); and (c) estimating the motions of vertices so as to minimize distortion during compensation of motions within each segmented block, and compensating for the motion of the image within the block. Here, the object-based quad-tree mesh is defined broadly so that it suits an object-based technique, and quad-tree blocks are classified, according to whether they contain part of an object, into virtual quadrature blocks, which contain no part of an object, and real quadrature blocks, which contain part of an object.
To achieve the second objective, the present invention provides a block segmentation method in an object-based quad-tree mesh motion compensation method using the Greedy algorithm, the method including: (a) forming blocks of predetermined sizes which surround an object within an image; (b) segmenting each block to form an object-based quad-tree mesh; (c) calculating the segmentation gains of the segmented blocks; (d) segmenting again the block having the maximum segmentation gain to form an object-based quad-tree mesh; (e) recalculating the segmentation gains of the blocks affected by the segmentation of step (d); and (f) returning to step (d) if the current transmission rate is smaller than a given transmission rate.
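Steps (a) through (f) amount to a greedy loop: repeatedly split the block with the largest segmentation gain until the bit budget is exhausted. The sketch below uses a hypothetical gain function and an assumed per-split bit cost; the patent's actual gain would weigh the distortion reduction against the extra quad-tree and motion-vector bits.

```python
# Greedy quad-tree segmentation sketch: split the max-gain block until the
# given transmission rate (bit budget) is used up. The gain function and
# the per-split bit cost are stand-ins, not the patent's definitions.

BITS_PER_SPLIT = 1 + 4 * 8  # one tree bit + four new motion vectors (assumed)

def gain(block):
    """Hypothetical segmentation gain; here, larger blocks simply gain more."""
    x, y, size = block
    return float(size)  # stand-in for (distortion saved) per (bits added)

def greedy_segment(initial_blocks, budget_bits, min_size):
    blocks = list(initial_blocks)                        # steps (a)/(b)
    used = 0
    while used + BITS_PER_SPLIT <= budget_bits:          # step (f)
        splittable = [b for b in blocks if b[2] > min_size]
        if not splittable:
            break
        best = max(splittable, key=gain)                 # steps (c)/(d)
        blocks.remove(best)
        x, y, size = best
        h = size // 2
        blocks += [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]
        used += BITS_PER_SPLIT
        # step (e): in the real method, the gains of neighbouring blocks
        # affected by this split would be recomputed here.
    return blocks

out = greedy_segment([(0, 0, 16)], budget_bits=70, min_size=4)
print(len(out))  # two splits fit in the budget: 1 -> 4 -> 7 blocks
```

Because each iteration spends bits only on the single most profitable split, the loop allocates the transmission rate where it buys the largest distortion reduction, which is exactly the rate-distortion coupling the Background section says the prior art lacks.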


REFERENCES:
patent: 5768434 (1998-06-01), Ran
patent: 6084908 (2000-07-01), Chiang et al.
patent: 6392705 (2002-05-01), Chaddha
Huang et al., “Classified Variable Block Size Motion Estimation Algorithm for Image Sequence Coding,” Proc. IEEE Int. Conf. on Image Processing (ICIP-94), vol. 3, Nov. 13-16, 1994, pp. 736-740.
Yeo et al., “A Motion Estimation and Image Segmentation Technique Based on the Variable Block Size,” Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP-97), vol. 4, Apr. 21-24, 1997, pp. 3137-3140.
Sullivan et al., “Motion Compensation for Video Compression Using Control Grid Interpolation,” Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP-91), vol. 4, Apr. 14-17, 1991, pp. 2713-2716.
Nieweglowski et al., “A Novel Video Coding Scheme Based on Temporal Prediction Using Digital Image Warping,” IEEE Trans. on Consumer Electronics, vol. 39, no. 3, 1993, pp. 141-150.
