Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal
Type: Reexamination Certificate
Filed: 1999-06-07
Issued: 2001-09-11
Examiner: Le, Vu (Department: 2613)
U.S. Class: C348S699000
Status: active
Patent Number: 06289052
ABSTRACT:
CROSS REFERENCE TO RELATED APPLICATIONS
The present application is related to the U.S. patent applications identified as Ser. No. 09/326,874, entitled "Methods and Apparatus for Context-Based Perceptual Quantization," and Ser. No. 09/326,872, entitled "Methods and Apparatus for Context-Based Inter/Intra Mode Selection," both filed concurrently with the present application on Jun. 7, 1999.
FIELD OF THE INVENTION
The invention relates to video compression and, more particularly, to motion estimation methods and apparatus in a video compression system.
BACKGROUND OF THE INVENTION
Hybrid coding methods are widely used to efficiently represent video sequences: temporal prediction is first performed to reduce temporal redundancy in a video sequence, and the resultant prediction errors are then encoded. Such coding approaches have been described in technical literature such as, for example, in:
Draft of MPEG-2: Test Model 5, ISO/IEC JTC1/SC29/WG11, April 1993; Draft of ITU-T Recommendation H.263, ITU-T SG XV, December 1995; A. N. Netravali and B. G. Haskell, Digital Pictures: Representation, Compression, and Standards, 2nd Ed., Plenum Press, 1995; and B. Haskell, A. Puri, and A. N. Netravali, Digital Video: An Introduction to MPEG-2, Chapman and Hall, 1997, the disclosures of which are incorporated herein by reference. It is well known that motion-compensated prediction following motion estimation removes temporal redundancy very effectively. Many motion estimation algorithms have been proposed, most of which are based on block matching. Such motion estimation approaches have been described in technical literature such as, for example, in: J. R. Jain and A. K. Jain, "Displacement Measurement And Its Application In Interframe Image Coding," IEEE Trans. Communications, vol. COM-29, pp. 1799-1808, December 1981; H. G. Musmann, P. Pirsch, and H.-J. Grallert, "Advances In Picture Coding," Proc. IEEE, vol. 73, no. 4, pp. 523-548, April 1985; R. Srinivasan and K. R. Rao, "Predictive Coding Based On Efficient Motion Estimation," IEEE Trans. Communications, vol. COM-33, no. 8, pp. 888-896, August 1985; and N. D. Memon and K. Sayood, "Lossless Compression Of Video Sequences," IEEE Trans. Communications, vol. 44, no. 10, pp. 1340-1345, October 1996, the disclosures of which are incorporated herein by reference. The block matching algorithm attempts to find the block in the reference frame that best matches the current block in terms of mean squared difference or mean absolute difference. The block matching approach has been adopted in many video compression standards since it is easy to implement and provides good estimation performance.
To decode a block properly, a decoder requires either motion vector information for the current block or knowledge of all the data used in the motion estimation performed at the encoder. However, the samples used by these motion estimation algorithms are not available at the decoder, so these prior art algorithms require sending overhead bits conveying motion vector information for the current block to a corresponding decoder. This overhead can be extremely heavy, particularly when block matching is performed on a small block or on a pixel basis. Thus, it would be highly advantageous to have a motion estimation process that does not require motion vector information to be transmitted to the decoder.
SUMMARY OF THE INVENTION
The present invention provides for motion estimation wherein a motion vector is generated for a current block of a current frame of a video sequence based only on previously reconstructed samples. Particularly, rather than using the current block itself to locate the block in a reference frame that best estimates it, as is done during conventional motion estimation, the present invention uses only previously reconstructed representations of samples adjacent to the current block, i.e., neighboring samples, to locate the previously reconstructed samples in the reference frame that best estimate the previously reconstructed samples associated with the current block. A motion vector is then generated from the previously reconstructed samples identified in the reference frame. The motion vector may be used to retrieve a motion compensated block in the reference frame, from which a predictor signal may then be generated.
Advantageously, by performing motion estimation in this inventive manner, motion vector information does not need to be transmitted by an encoder to a decoder for the current block. That is, since all previously reconstructed samples are available at the decoder at the time a bit stream representing the encoded current block is received, the decoder does not require motion vector information pertaining to the current block. The decoder thus performs motion estimation in the same manner as the encoder, using previously reconstructed samples already available. As a result, transmission bandwidth and/or storage capacity is saved.
In one aspect of the invention, a method of generating a motion vector associated with a current block of a current frame of a video sequence includes searching at least a section of a reference frame to identify previously reconstructed samples from the reference frame that best estimate motion associated with previously reconstructed samples from the current frame associated with the current block. The previously reconstructed samples may form sets of samples, and such sets may take the form of respective templates. A template having only previously reconstructed samples is referred to as a causal template. The method then includes computing a motion vector identifying the location of the previously reconstructed samples from the reference frame that best estimate the previously reconstructed samples from the current frame associated with the current block. It is to be appreciated that such a motion estimation technique is performed in both the video encoder and the video decoder, so that motion vector data does not need to be transmitted from the encoder to the decoder.
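As a rough illustration of this aspect, the sketch below assumes an L-shaped causal template of reconstructed samples above and to the left of the current block and a mean-absolute-difference cost; the template shape, the cost measure, and the names (causal_template, template_match) are illustrative assumptions, since the text does not fix them. Because the search touches only previously reconstructed samples, the decoder can run the identical routine and recover the same motion vector without any transmitted vector data.

```python
import numpy as np

def causal_template(frame, bx, by, bsize=16, t=2):
    """Extract an L-shaped causal template of width t: previously
    reconstructed samples above and to the left of the block at (bx, by).
    Assumes the block is at least t samples away from the top/left border."""
    top = frame[by - t:by, bx - t:bx + bsize]   # strip above the block
    left = frame[by:by + bsize, bx - t:bx]      # strip to the left of the block
    return np.concatenate([top.ravel(), left.ravel()]).astype(np.int32)

def template_match(rec_cur, rec_ref, bx, by, bsize=16, t=2, search=7):
    """Search the reference frame for the causal template that best matches
    the current block's causal template. Both encoder and decoder operate on
    reconstructed frames only, so both derive the same motion vector."""
    target = causal_template(rec_cur, bx, by, bsize, t)
    h, w = rec_ref.shape
    best_mv, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < t or x < t or y + bsize > h or x + bsize > w:
                continue  # template would fall outside the reference frame
            cand = causal_template(rec_ref, x, y, bsize, t)
            cost = np.mean(np.abs(target - cand))
            if cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv
```

In this sketch, the returned vector (dy, dx) would then index rec_ref[by+dy : by+dy+bsize, bx+dx : bx+dx+bsize] as the motion-compensated predictor for the current block, at both the encoder and the decoder.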
REFERENCES:
patent: 5619268 (1997-04-01), Kobayashi et al.
patent: 5734737 (1998-03-01), Chang et al.
patent: 5748761 (1998-05-01), Chang et al.
patent: 5818535 (1998-10-01), Asnis et al.
patent: 6154578 (2000-11-01), Park et al.
Tsukamoto et al., “Pose estimation of human face using synthesized model images”, ICIP, vol. 3, pp. 93-97, Nov. 1994.*
Evans, A.N., “Full field motion estimation for large non rigid bodies using correlation/relaxation labelling”, Sixth Intern. Conf. on Image Proc. and Its Appl., vol. 2, pp. 473-477, Jul. 1997.*
J.R. Jain et al., “Displacement Measurement and Its Application in Interframe Image Coding,” IEEE Transactions on Communications, vol. COM-29, No. 12, pp. 1799-1808, Dec. 1981.
R. Srinivasan et al., “Predictive Coding Based on Efficient Motion Estimation,” IEEE Transactions on Communications, vol. COM-33, No. 8, pp. 888-896, Aug. 1985.
N.D. Memon et al., “Lossless Compression of Video Sequences,” IEEE Transactions on Communications, vol. 44, No. 10, pp. 1340-1345, Oct. 1996.
Faryar Alireza Farid
Sen Moushumi
Yang Kyeong Ho
Le Vu
Lucent Technologies Inc.
Ryan, Mason & Lewis, LLP