Motion information coding and decoding method

Television – Bandwidth reduction system – Data rate reduction

Reexamination Certificate


Details

Classification: C348S405100
Type: Reexamination Certificate
Status: active
Patent number: 06825885

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to a method of coding motion information associated to a video sequence divided into successive frames, comprising the steps of:
subdividing the current frame into bidimensional blocks;
for each current block of said current frame, selecting in a previous frame, by means of a block-matching algorithm, a shifted block as the prediction of said current block, the motion vector between said shifted and current blocks being the predicted vector associated to said current block and all the motion vectors similarly predicted for a whole current frame constituting a motion vector field associated to said current frame;
for each current frame, coding by means of a differential encoding technique, including for each motion vector to be coded a predictor associated to it, the motion information constituted by said associated motion vector field.
The invention also relates to a corresponding encoding device, to a method of decoding motion information coded according to this coding method, and to a corresponding decoding device. In the detailed description of one implementation of the invention, that will be given later, the bidimensional blocks are for instance macroblocks, as defined in the standards of the MPEG family.
BACKGROUND OF THE INVENTION
The coding schemes proposed for digital video compression generally use motion estimation and compensation to reduce the temporal redundancy between the successive frames of the processed video sequence. In such methods, a set of motion vectors is determined at the encoding side and transmitted to the decoder. Most video coding standards use for the motion estimation operation the so-called block-matching algorithm (BMA), described for example in the document "MPEG video coding: a basic tutorial introduction", S. R. Ely, BBC Research and Development Report, 1996. Said algorithm, depicted in FIG. 1, tries to find, for each block B_c of a current image I_t, the block B_r of a previous reference image I_(t−1) that best matches it, said previous block being searched only in a limited area of this previous image (the search window SW) around the position of the block B_c. The set of motion vectors thus determined in the encoder for each block B_c of the current frame must be sent to the decoder.
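The block-matching search described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the block size, search range, and the sum-of-absolute-differences (SAD) cost are assumptions chosen for clarity.

```python
import numpy as np

def block_match(cur, ref, bx, by, bsize=8, search=4):
    """Find the motion vector (dx, dy) for the block at (bx, by) in `cur`
    by exhaustively searching a (2*search+1)^2 window in `ref`."""
    h, w = cur.shape
    block = cur[by:by + bsize, bx:bx + bsize]
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue  # candidate block falls outside the reference frame
            cand = ref[y:y + bsize, x:x + bsize]
            cost = np.abs(block.astype(int) - cand.astype(int)).sum()  # SAD
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv
```

Real encoders replace the exhaustive scan with faster search strategies, but the matching criterion is the same.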
In order to minimize the bitrate needed to transmit the motion vectors, these vectors are generally differentially encoded with reference to previously determined motion vectors (or predictors). More precisely, the encoding of the motion vectors describing the motion from previous blocks B_r to current ones B_c is realized by means of a predictive technique based on previously transmitted spatial neighbours: the motion vectors are differenced with respect to a prediction value and coded using variable-length codes.
SUMMARY OF THE INVENTION
It is a first object of the invention to propose a method for coding motion vectors that includes an improved prediction of these motion vectors.
To this end, the invention relates to a coding method such as defined in the introductory part of the description, which is moreover characterized in that, for each current block, the predictor used in the subtraction operation of said differential encoding technique is a spatio-temporal predictor P obtained by means of a linear combination defined by a relation of the type:
P = α·S + β·T
where S and T are spatial and temporal predictors respectively, and (α, β) are weighting coefficients respectively associated to said spatial and temporal predictors.
In an advantageous implementation of the invention, the criterion for the choice of the weighting coefficients is to minimize the distortion between the motion vector C to be coded and its predictor P in the least mean square sense, i.e. to minimize the following operator:
F = Σ [C − (α·S + β·T)]²
where the summation is done over the entire motion vector field, i.e. for all the blocks of the current frame.
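Minimizing F over the whole field is an ordinary linear least-squares problem in (α, β): each block contributes its spatial and temporal predictors as a row of a two-column design matrix. A sketch with synthetic data, assuming per-block vectors stacked as (n_blocks, 2) arrays:

```python
import numpy as np

def fit_weights(C, S, T):
    """C, S, T: arrays of shape (n_blocks, 2) holding, per block, the
    motion vector to code and its spatial and temporal predictors.
    Returns the (alpha, beta) minimizing sum ||C - (alpha*S + beta*T)||^2."""
    A = np.stack([S.ravel(), T.ravel()], axis=1)   # (2*n_blocks, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, C.ravel(), rcond=None)
    return coef                                     # (alpha, beta)
```

Since (α, β) are solved for per frame, they must be transmitted to the decoder along with the residuals, as the decoding method below makes explicit.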
Preferably, the spatial predictor is obtained by applying a median filtering on a set of motion vector candidates chosen in the neighbourhood of the current block, said set of motion vector candidates comprising three motion vector candidates if a spatial prediction compliant with the MPEG-4 standard is required.
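With three candidates, the median filtering mentioned above is typically applied component-wise, as in MPEG-4 style prediction. A minimal sketch; the choice of left, top, and top-right neighbours as the three candidates is an assumption borrowed from common codec practice:

```python
def median3(a, b, c):
    """Median of three scalars."""
    return sorted((a, b, c))[1]

def spatial_predictor(left, top, topright):
    """Component-wise median of three neighbouring motion vectors."""
    return (median3(left[0], top[0], topright[0]),
            median3(left[1], top[1], topright[1]))
```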
The temporal predictor may be determined either:
by re-using the spatial predictor already determined for the motion vector of the current block to point to a block inside the previously transmitted motion vector field; or
by keeping in memory the spatial predictor candidates used during the computation of the spatial predictor, pointing with them from the corresponding blocks in the current image to blocks of the previous image (whose motion vectors may also be viewed as spatial predictors for the temporal predictor to be determined), and applying a median filtering to these spatial predictors inside the previous motion vector field, the obtained result being said temporal predictor to be determined.
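The first variant can be sketched as follows: the spatial predictor S points from the current block into the previously transmitted motion vector field, and the vector stored there becomes T. Snapping the displaced position to the block grid by rounding and integer division is an illustrative assumption.

```python
def temporal_predictor(prev_field, bx, by, S, bsize=8):
    """prev_field: dict mapping block top-left coordinates to the motion
    vectors of the previously transmitted field. The spatial predictor S
    displaces (bx, by), and the vector of the block it lands in is T."""
    tx = (bx + round(S[0])) // bsize * bsize   # snap to the block grid
    ty = (by + round(S[1])) // bsize * bsize
    return prev_field.get((tx, ty), (0, 0))    # default when pointing off-field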
It is another object of the invention to propose a method of decoding motion information coded by means of said coding method.
To this end, the invention relates to a method of decoding motion information corresponding to an image sequence and which has been previously, before a transmission and/or storage step, coded by means of a coding method comprising the steps of:
subdividing the current image into bidimensional blocks;
for each current block of the current image, selecting in a previous image, by means of a block-matching algorithm, a shifted block as the prediction of said current block, the motion vector between said shifted and current blocks being the predicted vector associated to said current block and all the motion vectors similarly predicted for a whole current image constituting a motion vector field associated to said current image;
for each current image, coding the motion information constituted by said associated motion vector field, the motion vector C to be coded for each current block being approximated by a spatio-temporal predictor P obtained by means of a linear combination defined by a relation of the type:
P = α·S + β·T
where S and T are spatial and temporal predictors respectively, and (α, β) are weighting coefficients respectively associated to said spatial and temporal predictors, said decoding method being characterized in that it comprises two types of decoding step:
for the first motion vector field of the sequence, a first type of decoding step only based on spatial predictors;
for the other motion vector fields, a second type of decoding step comprising a computation of the spatio-temporal predictor P on the basis of the motion vectors of the previous motion vector field already decoded, spatial predictors defined in the neighbourhood of the current motion vector to be decoded, and the transmitted weighting coefficients α and β.
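For a non-first field, the second type of decoding step amounts to rebuilding P = α·S + β·T from the already-decoded previous field, the spatial neighbourhood, and the transmitted (α, β), then adding the received residual. A minimal sketch for one motion vector (function name and argument shapes are assumptions):

```python
def decode_vector(residual, S, T, alpha, beta):
    """Reconstruct one motion vector from its transmitted residual, the
    spatial predictor S, the temporal predictor T, and the transmitted
    weighting coefficients (alpha, beta)."""
    P = (alpha * S[0] + beta * T[0], alpha * S[1] + beta * T[1])
    return (residual[0] + P[0], residual[1] + P[1])
```

For the first field of the sequence no decoded previous field exists, which is why the first type of decoding step relies on spatial predictors only.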


REFERENCES:
patent: 5574663 (1996-11-01), Ozcelik et al.
patent: 0415491 (1990-08-01), None
patent: WO9746022 (1997-05-01), None
“True motion estimation with 3D recursive block matching”, G. de Haan et al., IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, no. 5, Oct. 1993, pp. 368-379.
