Title: Method and synthesizing a predicted image, video coding...
Classification: Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal
Type: Reexamination Certificate
Filed: 1997-09-19
Date of Patent: 2001-03-20
Examiner: Rao, Andy (Department: 2613)
U.S. Classification: C375S240170, C375S240160
Status: active
Patent Number: 06205178
ABSTRACT:
BACKGROUND OF THE INVENTION
This invention relates to digital video coding technology.
It is known that motion compensation utilizing the correlation between temporally adjacent frames produces a great compression effect in the high-efficiency coding of digital video. Therefore, the current international standards for video coding, H.261, H.263, MPEG-1, and MPEG-2, adopt a motion compensation technique called block matching, in which the image to be coded is divided into a plurality of square blocks and a motion vector is detected for each block. The algorithms of these international standards are described in, for example, "The Latest MPEG Textbook" supervised by Hiroshi Fujihara, August 1994.
FIG. 1 shows the general concept of the block matching. Referring to FIG. 1, reference numeral 101 represents the original image of a frame (current frame) being coded, and 102 the decoded image (reference image) of the already-coded frame near the current frame with respect to time.
In the block matching, the original image 101 is divided into a plurality of blocks Gi,j (i and j indicate the horizontal block number and the vertical block number, respectively). The normally used block size is 16 vertical pixels by 16 horizontal pixels. Then, motion estimation is performed for each block between the original image 101 and the reference image 102. The motion estimation is made in the block matching as follows. The block Pi,j(0,0) at the position corresponding to that of the block Gi,j is moved in parallel by any amount and in any direction on the reference image 102, and a motion vector is detected which shows the parallel movement providing the minimum difference between the image within the block Gi,j on the original image 101 and the image within the block after the movement on the reference image 102. In FIG. 1, 103 represents one of the divided blocks, and the motion vector of this block 103 is detected. This block 103 is represented by Gi,j. The block 104, indicated by Pi,j(u,v), is the block that is specified by the above motion estimation and that provides the minimum difference as described above. The block Pi,j(u,v) results from the parallel movement of the block Pi,j(0,0), which is at the position corresponding to that of the block Gi,j, by u pixels in the horizontal direction and by v pixels in the vertical direction. In addition, an arrow 105 indicates the motion vector MVi,j(u,v) detected by the motion estimation for the block Gi,j. In the block matching, the above motion estimation is performed for all divided blocks Gi,j on the original image 101, and the motion vector MVi,j(u,v) is detected for each block Gi,j.
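To make the block matching procedure concrete, the following Python sketch performs an exhaustive search for the motion vector of a single 16x16 block. It is only an illustration of the general technique described above, not the patent's method; the function name, the use of the sum of absolute differences (SAD) as the matching criterion, and the ±15-pixel search range are assumptions chosen for the example.

import numpy as np

def block_matching(original, reference, i, j, block=16, search=15):
    """Exhaustively search the motion vector of block Gi,j (illustrative sketch).

    i and j are the horizontal and vertical block numbers; the returned (u, v)
    is the parallel movement minimizing the SAD between Gi,j on the original
    image and the moved reference block Pi,j(u, v).
    """
    x0, y0 = i * block, j * block                    # top-left corner of Gi,j
    target = original[y0:y0 + block, x0:x0 + block].astype(np.int32)
    h, w = reference.shape
    best_sad, best_uv = None, (0, 0)

    for v in range(-search, search + 1):             # vertical displacement
        for u in range(-search, search + 1):         # horizontal displacement
            ys, xs = y0 + v, x0 + u
            if ys < 0 or xs < 0 or ys + block > h or xs + block > w:
                continue                             # moved block falls outside the reference
            candidate = reference[ys:ys + block, xs:xs + block].astype(np.int32)
            sad = int(np.abs(target - candidate).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_uv = sad, (u, v)
    return best_uv                                   # motion vector MVi,j(u, v)

In practice video coders restrict or accelerate this search, but the exhaustive form shows the minimization that defines the motion vector MVi,j(u,v).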
Moreover, each block Pi,j(0,0) on the reference image 102 at the position corresponding to that of each block Gi,j on the original image 101 is moved on the basis of the detected motion vector MVi,j(u,v) for each block Gi,j, and the images within the blocks Pi,j after movement are collected at the positions before the movement to synthesize a predicted image 106.
Thus, the original image is divided into a plurality of blocks, motion estimation is performed for each of the divided blocks between the original image and the reference image to detect a motion vector for each block, and the predicted image is synthesized from the detected motion vectors of the respective blocks and the reference image. This motion compensation is called local motion compensation (LMC). The predicted image synthesized by the local motion compensation is termed the predicted image of LMC.
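As a rough illustration of how the predicted image of LMC is assembled from the detected motion vectors and the reference image, the sketch below copies, for every block, the moved reference block back to the block's original position. It assumes one motion vector per block (i.e., pure block matching) and that every moved block lies inside the reference image; the function and variable names are illustrative only.

import numpy as np

def synthesize_lmc_prediction(reference, motion_vectors, block=16):
    """Synthesize the predicted image of LMC from per-block motion vectors (sketch).

    motion_vectors[j][i] is the (u, v) detected for block Gi,j; the image
    inside the moved reference block Pi,j(u, v) is collected back at the
    position of Gi,j. Every moved block is assumed to lie inside the image.
    """
    h, w = reference.shape
    predicted = np.zeros_like(reference)
    for j in range(h // block):                      # vertical block number
        for i in range(w // block):                  # horizontal block number
            u, v = motion_vectors[j][i]
            x0, y0 = i * block, j * block            # position of Gi,j
            ys, xs = y0 + v, x0 + u                  # position of Pi,j(u, v)
            predicted[y0:y0 + block, x0:x0 + block] = \
                reference[ys:ys + block, xs:xs + block]
    return predicted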
The above-mentioned block matching is a kind of local motion compensation, and is the special case in which only the parallel movement of blocks is considered in the motion estimation. The local motion compensation is not limited to the block matching; it also includes motion compensation that takes account of a combination of parallel movement and deformation of the blocks in the motion estimation. Since the block matching takes only the parallel movement into account, the motion estimation detects only one motion vector for each block. In the latter motion compensation, however, since a combination of parallel movement and deformation is considered, the motion estimation detects a plurality of motion vectors for each block. The image within the block after the movement/deformation in the local motion compensation is referred to as the LMC block image.
The above-mentioned local motion compensation detects the local movement within the image. On the other hand, it has been reported that image sequences involving camera panning and zooming, as in a sportscast, can be processed effectively by motion compensation applied globally to the whole image (see, for example, "Global Motion Compensation in Video Coding" by Kamikura et al., The Journal of The Institute of Electronics, Information and Communication Engineers of Japan, Vol. J76-B-1, No. 12, pp. 944-952, December 1993). This motion compensation is called the global motion compensation (GMC).
FIG. 2 shows an example of the global motion compensation. Referring to FIG. 2, there are shown an original image 201 of the current frame, a reference image 202 for the original image 201, a patch 203 in the case where the whole original image 201 is regarded as one region, and grid points 204, 205, 206 and 207 of the patch. In the global motion compensation, motion estimation is carried out over the whole image between the original image 201 and the reference image 202. In this motion estimation, the patch 203 is moved in parallel, deformed at will, or both moved in parallel and deformed on the reference image 202, and the motion vectors indicating the movement/deformation are detected so that the difference between the original image 201 and the image within the patch after the movement/deformation on the reference image 202 becomes minimum. In FIG. 2, reference numeral 208 represents the patch after the movement/deformation providing the minimum difference. At this time, the grid points 204, 205, 206 and 207 are moved to the points 209, 210, 211 and 212, respectively, and four motion vectors, illustrated in FIG. 2 by the arrows associated with the grid points 204 to 207, are obtained as the motion vectors of the patch 203.
In addition, the patch 203 is moved/deformed by moving the grid points 204 to 207 of the patch 203 on the basis of the detected motion vectors, so that the image within the patch 208 after the movement/deformation on the reference image 202 is synthesized as a predicted image.
The predicted image synthesized by the above-mentioned global motion compensation is termed the predicted image of GMC. The predicted image of GMC can also be synthesized by a high-speed algorithm, which is disclosed in Japanese Patent Application No. 8-60572 filed on Mar. 18, 1996 (the equivalent filed as application Ser. No. 08/819,628 on Mar. 17, 1997, now U.S. Pat. No. 6,008,852).
The methods of synthesizing the predicted image in the local motion compensation and the global motion compensation are well known; a supplementary explanation of them is given here. As the above block or patch is moved/deformed on the basis of the motion vector or vectors detected by the motion estimation, the positions of the pixels within the block or patch before the movement/deformation are transformed. Therefore, it is necessary to calculate the positions of the pixels within the block or patch after the movement/deformation. Here, the positions of the pixels after the movement are estimated by the bilinear transform as an example of such methods. The bilinear transform can be applied not only to the parallel movement in the motion compensation but also to rotation and deformation. When th
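As a sketch of the bilinear transform mentioned above, the following Python function synthesizes a GMC predicted image for a patch that covers the whole frame: the motion vector of each pixel is obtained by bilinearly interpolating the four grid-point motion vectors, and the reference image is then sampled at the transformed position. The corner ordering, the nearest-neighbour sampling, and the clipping at the image border are simplifying assumptions of this example, not details taken from the patent or from the cited high-speed algorithm.

import numpy as np

def synthesize_gmc_prediction(reference, corner_mvs):
    """Synthesize a GMC predicted image with the bilinear transform (sketch).

    corner_mvs holds the motion vectors (u, v) of the four grid points of a
    patch covering the whole frame, assumed to be given in the order
    top-left, top-right, bottom-left, bottom-right.
    """
    h, w = reference.shape
    (u_tl, v_tl), (u_tr, v_tr), (u_bl, v_bl), (u_br, v_br) = corner_mvs
    predicted = np.zeros_like(reference)

    for y in range(h):
        for x in range(w):
            # Normalized position of the pixel inside the patch.
            a = x / (w - 1)
            b = y / (h - 1)
            # Bilinear interpolation of the four grid-point motion vectors.
            u = ((1 - a) * (1 - b) * u_tl + a * (1 - b) * u_tr
                 + (1 - a) * b * u_bl + a * b * u_br)
            v = ((1 - a) * (1 - b) * v_tl + a * (1 - b) * v_tr
                 + (1 - a) * b * v_bl + a * b * v_br)
            # Pixel position after the movement/deformation, sampled from the
            # reference with nearest-neighbour and clipped to the frame.
            xs = int(round(min(max(x + u, 0), w - 1)))
            ys = int(round(min(max(y + v, 0), h - 1)))
            predicted[y, x] = reference[ys, xs]
    return predicted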
Inventors: Nakaya Yuichiro; Suzuki Yoshinori
Attorney/Agent: Antonelli Terry Stout & Kraus LLP
Assignee: Hitachi, Ltd.
Examiner: Rao Andy