Method of estimating motion in interlaced video

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

Classification: C348S699000
Type: Reexamination Certificate
Status: active
Patent number: 06449312

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to a method of estimating motion in interlaced video, and especially to a method of estimating motion in interlaced video that enables the interlaced video to be coded efficiently.
BACKGROUND OF THE INVENTION
In a typical video coding system, the video input is in the form of consecutive frames, each frame consisting of a number of pixels that make up an image. Although each frame could be individually coded and transmitted to a receiver, where it can be decoded and displayed, the key to improving video coding efficiency is to achieve a bit rate saving while maintaining a given image quality. To achieve this, current video coding technologies employ several different techniques that try to reduce the number of bits that need to be coded and transmitted. These are:
(i) motion estimation (ME) and motion compensation (MC) mode selection;
(ii) field/frame based Discrete Cosine Transformation (DCT) mode selection;
(iii) quantisation driven by rate control algorithm; and
(iv) entropy coding.
As a central component of these key operations, motion estimation and compensation seeks to remove the temporal-domain redundancy present in natural video sources. Motion estimation is therefore a very important part of the coding process. However, motion estimation is also the most computationally complex element in a video coding system, especially for real-time full-motion video communication, which requires a real-time implementation of the video encoder with a large motion search window. The search window of motion displacements can be as large as ±128 pixels in the MPEG-2 Main Profile and Main Level standard, as described in ISO/IEC Standard 13818-2, Generic coding of moving pictures and associated audio information: Video, 1995, and can be much larger in high-resolution and high-quality entertainment video applications.
It will therefore be appreciated that a commercial implementation of real-time block matching motion estimation (ME) involves considering memory bandwidth (the amount of data that must be fetched from the frame buffer to the ME processor), the search pattern to be used (the motion search complexity within a given search window, or the fast search algorithm), the matching metric to be used (the basic matching complexity of each matching operation, or the block matching metric), and the suitability of the process for real-time hardware implementation (the ability of the process to be efficiently pipelined).
Motion estimation involves matching a block in a current frame with a block in a reference or previous frame. A full-search block matching technique using original pixel values examines all possible motion displacements, known as motion vectors (MV), within the search window to find the best match in terms of a predetermined distortion criterion. Due to the heavy computation and the extensive data fetching required between the frame buffer and the processor performing the motion estimation, full-search motion estimation using pixel intensities is only used for motion searches over relatively small search ranges. Thus, many fast searching schemes have been developed to reduce computation and data fetching. Some of these are described by J. Feng, K. T. Lo and H. Mehrpour, in “MPEG Compatible Adaptive Block Matching Algorithm for Motion Estimation”, published in Australian Telecommunication Networks and Applications Conference, December 1994; by J. N. Kim and T. S. Choi, in “A Fast Motion Estimation for Software Based Real-Time Video Coding”, published in IEEE Transactions on Consumer Electronics, Vol. 45, No. 2, pp. 417-425, May 1999; by B. Liu and A. Zaccarin, in “A Fast Algorithm for the Estimation of Block Motion Vectors”, published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 3, pp. 730-741, April 1993; and by J. Chalidabhongse and C. C. Jay Kuo, in “Fast Motion Estimation Using Multiresolution-Spatio-Temporal Correlations”, published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 7, No. 3, pp. 477-488, June 1997.
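As an illustration of the full-search approach described above, the following C sketch exhaustively tests every candidate displacement in a search window and keeps the one with the lowest distortion. It is a minimal sketch only: the use of the sum of absolute differences (SAD) as the distortion criterion, the 16×16 block size, the ±16 pixel range, and the function names are assumptions made for the example, not details taken from the patent.

#include <limits.h>
#include <stdlib.h>

/* Illustrative full-search block matching using the sum of absolute
 * differences (SAD) as the distortion criterion.  Block size, search
 * range and image layout (8-bit luma, row-major, stride = width) are
 * assumptions for this example only. */

#define BLOCK 16      /* macroblock dimension in pixels          */
#define RANGE 16      /* search range: +/-RANGE pixels each axis */

typedef struct { int dx, dy; unsigned long sad; } MotionVector;

static unsigned long block_sad(const unsigned char *cur,
                               const unsigned char *ref,
                               int stride)
{
    unsigned long sad = 0;
    for (int y = 0; y < BLOCK; y++)
        for (int x = 0; x < BLOCK; x++)
            sad += abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
    return sad;
}

/* Exhaustively test every candidate displacement within the search
 * window and keep the one with the smallest SAD.  (bx, by) is the
 * top-left corner of the current block. */
MotionVector full_search(const unsigned char *cur_frame,
                         const unsigned char *ref_frame,
                         int width, int height, int bx, int by)
{
    MotionVector best = { 0, 0, ULONG_MAX };
    const unsigned char *cur = cur_frame + by * width + bx;

    for (int dy = -RANGE; dy <= RANGE; dy++) {
        for (int dx = -RANGE; dx <= RANGE; dx++) {
            int rx = bx + dx, ry = by + dy;
            /* Skip candidates that fall outside the reference frame. */
            if (rx < 0 || ry < 0 || rx + BLOCK > width || ry + BLOCK > height)
                continue;
            unsigned long sad = block_sad(cur, ref_frame + ry * width + rx, width);
            if (sad < best.sad) {
                best.dx = dx;
                best.dy = dy;
                best.sad = sad;
            }
        }
    }
    return best;
}

With a ±128 pixel window, as permitted by MPEG-2 Main Profile at Main Level, the same loops would have to evaluate (2×128+1)² ≈ 66,000 candidate positions per macroblock, which illustrates why full search is restricted to small ranges.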
To complicate matters, in interlaced video, alternate lines of each frame are generated at different times. For example, the odd lines, forming one field, and the even lines, forming another field, are generated separately, even though the odd and even lines are interlaced with each other. Because of the time difference between the generation of the two fields, looking for positional changes in a block between one frame and another causes complications. To deal with the interlaced video data format, both the MPEG-2 and MPEG-4 standards have functionality to code full CCIR Rec. 601 resolution (i.e. 720×480 at a 60 Hz interlaced field rate). As defined in the MPEG-2 standard, the video format follows the Rec. 601 interlaced form. Therefore, further consideration must be given to dealing with interlaced format video. A simple approach would be to merge each pair of fields to form a frame. However, as the two fields are generated at different times (1/50th of a second apart for Rec. 601 at 25 Hz), moving objects are in different places in each field and so do not merge well. Another approach would be to code the even and odd fields separately.
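Both approaches begin by separating an interlaced frame into its two constituent fields. The following C sketch shows that step for an 8-bit luma plane stored row-major with a stride equal to its width; the function name and buffer layout are assumptions made for the example.

#include <string.h>

/* Split an interlaced frame into its top (even-line) and bottom
 * (odd-line) fields.  Assumes an 8-bit, row-major luma plane with
 * stride equal to width; each field buffer must hold height/2 rows. */
void split_fields(const unsigned char *frame, int width, int height,
                  unsigned char *top_field, unsigned char *bottom_field)
{
    for (int y = 0; y < height; y++) {
        const unsigned char *src = frame + y * width;
        if ((y & 1) == 0)
            memcpy(top_field + (y / 2) * width, src, width);    /* even lines */
        else
            memcpy(bottom_field + (y / 2) * width, src, width); /* odd lines  */
    }
}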
The decision to code in frame mode or in field mode can be made at the macroblock level. A macroblock, which has dimensions of 16×16 pixels, is composed of even field lines and odd field lines arranged in an interlaced format. To find the best coding mode, the results of motion estimation in the frame mode are compared with the results of motion estimation in the field mode. The decision to code in either frame or field mode depends on both the error residue and the motion vectors (MV) produced by the motion estimation.
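One common way to make that comparison, shown in the C sketch below, is to weigh the residue (here a SAD total) against a cost for coding the motion vectors and to pick the cheaper mode. The specific cost function, the lambda weighting, and the function names are assumptions for illustration; the patent does not prescribe this particular formula.

/* Illustrative frame/field coding-mode decision for one macroblock.
 * Frame-mode motion estimation is compared against separate motion
 * estimation of the top- and bottom-field halves of the macroblock.
 * Combining the SAD totals with a notional motion-vector bit cost is
 * an assumption for this example; the decision criterion in practice
 * weighs both the error residue and the MVs produced. */

typedef enum { MODE_FRAME, MODE_FIELD } McMode;

McMode choose_mb_mode(unsigned long frame_sad, int frame_mv_bits,
                      unsigned long top_field_sad, int top_mv_bits,
                      unsigned long bottom_field_sad, int bottom_mv_bits,
                      int lambda /* weighting of MV cost vs. residue */)
{
    unsigned long frame_cost = frame_sad
                             + (unsigned long)(lambda * frame_mv_bits);
    unsigned long field_cost = top_field_sad + bottom_field_sad
                             + (unsigned long)(lambda * (top_mv_bits + bottom_mv_bits));
    return (field_cost < frame_cost) ? MODE_FIELD : MODE_FRAME;
}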
The present invention therefore seeks to provide a method of determining a good matching reference image block, from a plurality of reference image blocks in a search window of a reference image, to provide a motion vector for a current image block in a current image in a video having interlaced fields, which overcomes, or at least reduces, the above-mentioned problems of the prior art.


REFERENCES:
patent: RE35093 (1995-11-01), Wang et al.
patent: 5488419 (1996-01-01), Hui et al.
patent: 5565922 (1996-10-01), Krause
patent: 5761398 (1998-06-01), Legall
patent: 6026195 (2000-02-01), Eifrig et al.
patent: 6108039 (2000-08-01), Linzer et al.
patent: 6137837 (2000-10-01), Nemiroff et al.
patent: 2002/0025001 (2002-02-01), Ismaeil et al.
Li et al., “Reliable Motion Detection/Compensation for Interlaced Sequences and Its Applications to Deinterlacing”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 10, No. 1, pp. 23-29, Feb. 2000.
