Motion vector coding method

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate

Details

Type: Reexamination Certificate

Status: active

Patent number: 06785333

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a motion vector coding method and an affine motion estimation method, and more particularly, to a motion vector coding method capable of producing a low bit rate bitstream and an affine motion estimation method capable of effectively performing coding on a small block.
2. Description of the Related Art
Recently, the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) has made efforts to standardize the H.26L protocol for next-generation visual telephony. Since standard schemes such as H.261, H.263, and H.263+ that define a motion vector coding method for visual telephony were adopted as ITU-T standards, technologies based on the H.263++ and Moving Picture Experts Group (MPEG)-4 standard schemes have been developed. Nonetheless, there remains a need to further improve coding efficiency for ultra-low bit rate, real-time applications with a short end-to-end delay. That is, it is highly desirable to have an estimation method and a motion vector coding method that provide an improved frame rate at the same bit rate as the coding method of the H.263+ standard, or that significantly reduce the bit rate while maintaining the same image quality as an image encoded by an H.263+ compliant coding method.
In one conventional motion estimation method, assuming that (i, j) denotes the coordinates of a pixel in a macroblock or a sub-block thereof, an affine motion estimation is performed to represent the motion of each pixel in an image using the following Equations (1a) and (1b):
v_X(i, j) = a_0 + a_1·i + a_2·j  (1a)
v_Y(i, j) = a_3 + a_4·i + a_5·j  (1b)
where v_X(i, j) and v_Y(i, j) are the motion magnitude components in the X- and Y-axis directions for the pixel located at (i, j) in the block. The expression (v_X(i, j), v_Y(i, j))^T, consisting of the motion magnitude components v_X(i, j) and v_Y(i, j), is referred to as the motion vector of the pixel located at the coordinates (i, j). That is, the motion vector of each pixel is determined by the pixel location and six parameters (a_0, a_1, a_2, . . . , a_5)^T. These parameters (a_0, a_1, a_2, . . . , a_5)^T are called the affine motion parameters. However, according to a method of estimating motion using the affine motion parameters, as the number of bits representing the affine motion parameters increases, computation for motion estimation becomes more complex and takes more time. Furthermore, for some blocks, this affine motion estimation is no more effective than conventional translational motion estimation.
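By way of illustration (this sketch and its names are not part of the patent text), the per-pixel motion field defined by Equations (1a) and (1b) can be computed from the six affine parameters as follows, assuming Python with NumPy and a rectangular block of pixels:

```python
import numpy as np

def affine_motion_field(params, num_i, num_j):
    """Per-pixel motion vectors over a block of num_i x num_j pixels.

    params = (a0, a1, a2, a3, a4, a5) as in Equations (1a) and (1b):
        v_X(i, j) = a0 + a1*i + a2*j
        v_Y(i, j) = a3 + a4*i + a5*j
    Returns arrays v_x, v_y indexed by the pixel coordinates (i, j).
    """
    a0, a1, a2, a3, a4, a5 = params
    i, j = np.meshgrid(np.arange(num_i), np.arange(num_j), indexing="ij")
    v_x = a0 + a1 * i + a2 * j
    v_y = a3 + a4 * i + a5 * j
    return v_x, v_y
```

For example, a small rotation by an angle θ about the block origin corresponds approximately to the parameters (0, 0, -θ, 0, θ, 0), so every pixel receives a different vector.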
Meanwhile, standards such as H.261, H.263, MPEG-1, and MPEG-2 represent the motion of a pixel based on a translational motion model expressed by:
v_X(i, j) = t_1  (2a)
v_Y(i, j) = t_2  (2b)
As is evident from Equations (2a) and (2b), the motion vectors of all pixels in a block are fixed as one vector. However, in the case of affine motion, as expressed in Equations (1a) and (1b), the motion vector varies with each pixel location. The affine motion estimation is capable of representing complex motions that include any or all of translation, rotation, magnification, reduction, and shear, thereby allowing for more precise motion estimation.
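Continuing the illustrative sketch above (the affine_motion_field helper is a hypothetical name, not from the patent), setting a_1 = a_2 = a_4 = a_5 = 0 reproduces the translational model of Equations (2a) and (2b), while non-zero values give a vector that varies from pixel to pixel:

```python
# Translational motion (Equations 2a, 2b): every pixel gets the same vector (t1, t2).
v_x, v_y = affine_motion_field((1.5, 0, 0, -0.5, 0, 0), 8, 8)
assert np.all(v_x == 1.5) and np.all(v_y == -0.5)

# Affine motion (Equations 1a, 1b): e.g. a small rotation, vector varies per pixel.
v_x, v_y = affine_motion_field((0.0, 0.0, -0.1, 0.0, 0.1, 0.0), 8, 8)
```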
To estimate the motion of an image using affine motion estimation, the affine motion parameters in Equations (1a) and (1b) must be obtained on a block-by-block basis. The motion parameters correspond to the displacements that minimize the difference between the pixel values of the present image and those of the motion-compensated preceding image, and they are expressed by the following Equation (3):
arg min_{a_k} Σ_{(i,j)∈M_k} { I_n(i, j) - I_(n-1)(i + v_X(i, j), j + v_Y(i, j)) }^2  (3)
where I_n(i, j) denotes the luminance of the pixel at location (i, j), and M_k denotes the k-th block. (v_X(i, j), v_Y(i, j))^T is given by Equations (1a) and (1b). That is, the motion parameters that minimize the luminance difference between a present block and the previous block motion-compensated by those parameters are the ones defined by Equation (3), and these motion parameters are called the motion-estimated parameters.
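A minimal sketch of the cost in Equation (3), assuming the frames are stored as 2-D luminance arrays and using rounded (nearest-neighbor) sampling of the motion-compensated positions, which the patent text does not specify:

```python
import numpy as np

def block_ssd(params, cur, prev, block):
    """Sum of squared luminance differences of Equation (3) for one block.

    params : (a0, ..., a5) affine parameters of the block.
    cur, prev : 2-D arrays I_n and I_(n-1), indexed as image[i, j].
    block : list of (i, j) pixel coordinates belonging to M_k.
    """
    a0, a1, a2, a3, a4, a5 = params
    ssd = 0.0
    for i, j in block:
        # Motion-compensated position in the previous frame (Equations 1a, 1b).
        ic = i + a0 + a1 * i + a2 * j
        jc = j + a3 + a4 * i + a5 * j
        # Clamp and round; a real codec would interpolate sub-pixel positions.
        ic = int(round(min(max(ic, 0), prev.shape[0] - 1)))
        jc = int(round(min(max(jc, 0), prev.shape[1] - 1)))
        diff = float(cur[i, j]) - float(prev[ic, jc])
        ssd += diff * diff
    return ssd
```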
In the affine motion estimation method according to the conventional art, the motion parameters are obtained using the following Equation (4):
a_k^(l+1) = a_k^l + u_k^l  (4)
where a_k^l = (a_0, a_1, a_2, a_3, a_4, a_5)^T and l denotes the iteration index.
When l equals 0, the motion parameter is expressed by:
a_k^0 = (0, 0, 0, 0, 0, 0)^T
In this case, the motion parameter is called an initial value.
The update term u_k^l in Equation (4) and its components are given by:
u_k^l = { Σ_{(i,j)∈M_k} h_ij^l (h_ij^l)^T }^(-1) { Σ_{(i,j)∈M_k} d_n^l(i, j) h_ij^l }  (6)
h_ij^l = ( G_X^l(i, j), i·G_X^l(i, j), j·G_X^l(i, j), G_Y^l(i, j), i·G_Y^l(i, j), j·G_Y^l(i, j) )^T  (7)
d_n^l(i, j) = I_n(i, j) - I_(n-1)(i + a_0^l + a_1^l·i + a_2^l·j, j + a_3^l + a_4^l·i + a_5^l·j)  (8)
G_X^l(i, j) = { I_(n-1)(i + a_0^l + a_1^l·i + a_2^l·j + 1, j + a_3^l + a_4^l·i + a_5^l·j) - I_(n-1)(i + a_0^l + a_1^l·i + a_2^l·j - 1, j + a_3^l + a_4^l·i + a_5^l·j) } / 2  (9a)
G_Y^l(i, j) = { I_(n-1)(i + a_0^l + a_1^l·i + a_2^l·j, j + a_3^l + a_4^l·i + a_5^l·j + 1) - I_(n-1)(i + a_0^l + a_1^l·i + a_2^l·j, j + a_3^l + a_4^l·i + a_5^l·j - 1) } / 2  (9b)
The method of estimating the affine motion parameters shown in Equations (4) through (9b) is called the differential motion estimation method, and it is the method most commonly used for affine motion estimation.
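One update step of this differential method can be sketched as follows, under the same assumptions as the cost sketch above (2-D luminance arrays, nearest-neighbor sampling clamped at the frame border, both assumptions rather than anything stated in the patent); it assembles h_ij^l, d_n^l, and the gradients of Equations (7) through (9b), then solves the 6-by-6 system of Equation (6):

```python
import numpy as np

def differential_update(a, cur, prev, block):
    """One iteration of Equations (6)-(9b): returns the update term u_k^l.

    a : current affine parameters a_k^l = (a0, ..., a5).
    cur, prev : luminance arrays I_n and I_(n-1).
    block : list of (i, j) coordinates in block M_k.
    """
    a0, a1, a2, a3, a4, a5 = a

    def prev_at(x, y):
        # Nearest-neighbor sample of I_(n-1), clamped to the frame (an assumption;
        # the patent does not state the boundary or interpolation rule).
        x = int(round(min(max(x, 0), prev.shape[0] - 1)))
        y = int(round(min(max(y, 0), prev.shape[1] - 1)))
        return float(prev[x, y])

    H = np.zeros((6, 6))   # accumulates h h^T (left factor of Equation 6)
    b = np.zeros(6)        # accumulates d * h (right factor of Equation 6)
    for i, j in block:
        ic = i + a0 + a1 * i + a2 * j        # motion-compensated coordinates
        jc = j + a3 + a4 * i + a5 * j
        gx = (prev_at(ic + 1, jc) - prev_at(ic - 1, jc)) / 2.0   # Equation (9a)
        gy = (prev_at(ic, jc + 1) - prev_at(ic, jc - 1)) / 2.0   # Equation (9b)
        h = np.array([gx, i * gx, j * gx, gy, i * gy, j * gy])   # Equation (7)
        d = float(cur[i, j]) - prev_at(ic, jc)                   # Equation (8)
        H += np.outer(h, h)
        b += d * h
    # Equation (6); a pseudo-inverse guards against a singular matrix on flat blocks.
    return np.linalg.pinv(H) @ b
```

The pseudo-inverse is used here purely as a safeguard for textureless blocks; the patent text does not discuss that case.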
In this case, for affine motion estimation, the iteration index l is first set to 0 and the squared error SE is set to as large a value as possible. Next, the update term u_k^l of Equation (6) is computed using Equations (7) through (9b), and the result is substituted into Equation (4) to obtain a_k^(l+1). Then, the difference between the present block and the previous block motion-compensated by a_k^(l+1) is obtained using the following equation:
SE(l+1) = Σ_{(i,j)∈M_k} { I_n(i, j) - I_(n-1)(i + a_0^(l+1) + a_1^(l+1)·i + a_2^(l+1)·j, j + a_3^(l+1) + a_4^(l+1)·i + a_5^(l+1)·j) }^2  (10)
If SE(l+1) is less than SE(l), l is increased by 1 and the above steps are performed again. If SE(l+1) is greater than SE(l), a_k^l at that time is determined as the estimated motion parameter, and the iterative motion estimation process is terminated.
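Putting the pieces together, the iteration described above can be sketched as follows, reusing the hypothetical block_ssd and differential_update helpers from the earlier sketches; the iteration cap is an added safeguard, not something stated in the text:

```python
import numpy as np

def estimate_affine_parameters(cur, prev, block, max_iter=20):
    """Differential affine estimation loop: Equations (4), (6)-(9b), and (10).

    Starts from a_k^0 = (0, ..., 0)^T, updates with Equation (4), and stops
    as soon as the squared error SE of Equation (10) stops decreasing.
    """
    a = np.zeros(6)                 # a_k^0 = (0, 0, 0, 0, 0, 0)^T
    se = float("inf")               # SE initialized as large as possible
    for _ in range(max_iter):       # max_iter is a safety cap (an assumption)
        a_next = a + differential_update(a, cur, prev, block)  # Equation (4)
        se_next = block_ssd(a_next, cur, prev, block)          # Equation (10)
        if se_next >= se:           # error no longer decreases: stop
            break
        a, se = a_next, se_next     # accept a_k^(l+1) and continue iterating
    return a                        # estimated motion parameters
```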
However, the affine motion estimation method has a problem in that its motion estimation performance for small blocks is significantly degraded. Thus, although the affine motion estimation method provides better image prediction than the translational motion estimation method, it cannot be applied to standards such as H.261, H.263, MPEG-1, and MPEG-4.
Meanwhile, the affine motion parameters are real-valued numbers. Thus, to use these parameters in actual video coding, they must be converted, that is, quantized, to fixed-point numbers.
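As a simple illustration only (the step sizes, and the split between the translation-like parameters a_0, a_3 and the position-dependent parameters a_1, a_2, a_4, a_5, are assumptions of this sketch rather than the patent's own scheme), uniform quantization maps the real-valued parameters to integer levels:

```python
def quantize_affine(params, step_translation=0.25, step_linear=1.0 / 512):
    """Uniformly quantize affine parameters to integer levels (illustrative only).

    a0 and a3 behave like translational components, so a coarser step is used
    (e.g. quarter-pel); a1, a2, a4, a5 are scaled by pixel position, so a much
    finer step is used. Both step sizes are assumptions of this sketch.
    """
    steps = [step_translation, step_linear, step_linear,
             step_translation, step_linear, step_linear]
    levels = [int(round(p / s)) for p, s in zip(params, steps)]
    dequantized = [lvl * s for lvl, s in zip(levels, steps)]
    return levels, dequantized
```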
In standards such as H.261, H.263, MPEG-1, and MPEG-4, to which a translational motion model is applied, the motion information of each block is predictively encoded using the motion vectors of neighboring blocks, thereby reducing the number of bits spent on motion information in the bitstream. However, if an affine motion model is used, since each of the six motion parameters shown in Equations (1a) and (1b) is not affected by neighboring blocks, it is very difficult to predictively encode the motion information in the way employed in the translational motion model. That is, a significantly large number of bits may be required to code the motion parameters of an affine motion model. Thus, it is highly desirable to have a method of effectively encoding the motion information of an affine motion model.
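For comparison, the sketch below shows the kind of predictive coding used for translational motion vectors in H.263-style coders, where the median of three neighboring blocks' vectors serves as the predictor and only the small residual is entropy-coded; it is exactly this kind of prediction that becomes ineffective when each block carries six affine parameters. The helper names are illustrative:

```python
def predict_translational_mv(left, above, above_right):
    """Median prediction of a translational motion vector from three neighbors."""
    pred_x = sorted([left[0], above[0], above_right[0]])[1]
    pred_y = sorted([left[1], above[1], above_right[1]])[1]
    return pred_x, pred_y

def mv_residual(mv, left, above, above_right):
    """Residual that is entropy-coded instead of the full vector."""
    px, py = predict_translational_mv(left, above, above_right)
    return mv[0] - px, mv[1] - py
```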
