Mesh node motion coding to enable object based...

Pulse or digital communications – Bandwidth reduction or expansion – Television or motion video signal

Reexamination Certificate


Details

US classification: C382S243000

Status: active

Patent number: 06339618

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to coding of digital video signals using mesh or wireframe modeling. More particularly, the invention relates to a coding scheme that codes video data as a base layer of coded data and a second, supplementary layer of mesh node coded data. The mesh node coding permits decoders to apply enhanced functionalities to elements of the video image.
2. Related Art
Video coding techniques are known. Typically, they code video data from a first data rate down to a second, lower data rate. Such coding generally is necessary to transmit the video information through a channel, which may be a radio channel, a data link of a computer network, or a storage element such as an optical or magnetic memory. Video coding reduces the capacity requirements of channels and permits the video information to be reconstructed at a decoder for display or manipulation.
Different coding applications have different objectives. Some desire only to code and decode video data. Others, however, particularly those that code synthetic video data, desire to attach functionalities to the video. Functionalities may include: motion tracking of moving objects, temporal interpolation of objects, modification of video objects (such as warping an image upon a video object), and manipulation of the size, orientation or texture of objects in a scene. Often, such operations must be performed on individual objects in a scene, some of which may be synthetic and others of which are natural.
One proposed standard for video coding has been made in the MPEG-4 Video Verification Model Version 5.1, ISO/IEC JTC1/SC29/WG11 N1469 Rev., December 1996 (“MPEG-4, V.M. 5.1”). According to MPEG-4, V.M. 5.1, encoders identify “video objects” from a scene to be coded. Individual frames of the video object are coded as “video object planes” or VOPs. The spatial area of each VOP is organized into blocks or macroblocks of data, which typically are 8 pixel by 8 pixel (blocks) or 16 pixel by 16 pixel (macroblocks) rectangular areas. A macroblock typically is a grouping of four blocks. For simplicity, reference herein is made to blocks and “block based coding,” but it should be understood that such discussion applies equally to macroblocks and macroblock based coding. Image data of the blocks are coded by an encoder, transmitted through a channel and decoded by a decoder.
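The block and macroblock layout described above can be pictured concretely. The following is a minimal sketch, not part of the MPEG-4 proposal itself; the function name and the use of NumPy arrays are assumptions made only for illustration. It partitions a VOP's luminance plane into 16 pixel by 16 pixel macroblocks, each grouped into four 8 pixel by 8 pixel blocks.

```python
import numpy as np

def partition_into_macroblocks(vop_plane, mb_size=16, block_size=8):
    """Split a VOP's luminance plane into mb_size x mb_size macroblocks,
    each holding four block_size x block_size blocks (illustrative only)."""
    height, width = vop_plane.shape
    macroblocks = []
    for y in range(0, height - mb_size + 1, mb_size):
        for x in range(0, width - mb_size + 1, mb_size):
            mb = vop_plane[y:y + mb_size, x:x + mb_size]
            # A macroblock is a grouping of four blocks.
            blocks = [mb[by:by + block_size, bx:bx + block_size]
                      for by in range(0, mb_size, block_size)
                      for bx in range(0, mb_size, block_size)]
            macroblocks.append(((y, x), blocks))
    return macroblocks

# A hypothetical 48 x 64 VOP plane yields (48/16) * (64/16) = 12 macroblocks.
vop_plane = np.zeros((48, 64), dtype=np.uint8)
assert len(partition_into_macroblocks(vop_plane)) == 12
```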
Under MPEG-4, V.M. 5.1 coding, block data of most VOPs are not coded individually. As shown in FIG. 1A, image data of a block from one VOP may be used as a basis for predicting the image data of a block in another VOP. Coding first begins by coding an initial VOP, an “I-VOP”, without prediction. However, the I-VOP data may be used to predict data of a second VOP, a “P-VOP”. Blocks of the second VOP are coded based on differences between the actual data and the predicted data from blocks of the I-VOP. Finally, image data of a third type of VOP, a “bidirectional VOP” or B-VOP, may be predicted from two previously coded VOPs. As is known, the B-VOP typically is coded after the I-VOP and P-VOP are coded. However, the different types of VOPs may be (and typically are) coded in an order that is different than the order in which they are displayed. Thus, as shown in FIG. 1A, the P-VOP is coded before the B-VOP even though it appears after the B-VOP in display order. Other B-VOPs may appear between the I-VOP and the P-VOP.
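To make the reordering concrete, the sketch below converts a display-order sequence of VOP types into one plausible coding order, in which each P-VOP anchor is coded before the B-VOPs that are predicted from it. The function is hypothetical and greatly simplified; a real encoder's anchor selection is more involved.

```python
def coding_order(display_order):
    """Reorder VOP types from display order to a plausible coding order:
    anchors (I, P) are coded first, then the B-VOPs that depend on them."""
    coded, pending_b = [], []
    for vop_type in display_order:
        if vop_type in ('I', 'P'):
            coded.append(vop_type)      # anchor VOPs are coded immediately
            coded.extend(pending_b)     # then the B-VOPs awaiting this anchor
            pending_b = []
        else:
            pending_b.append(vop_type)  # B-VOPs wait for their future anchor
    return coded + pending_b

# Display order I B B P becomes coding order I P B B, as described for FIG. 1A.
assert coding_order(['I', 'B', 'B', 'P']) == ['I', 'P', 'B', 'B']
```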
Where prediction is performed (P-VOP and B-VOP), image data of blocks are coded as motion vectors and residual texture information. Blocks may be thought to “move” from frame to frame (VOP to VOP). Thus, MPEG-4 codes motion vectors for each block. The motion vector, in effect, tells a decoder to predict the image data of a current block by moving image data of blocks from one or more previously coded VOPs to the current block. However, because such prediction is imprecise, the encoder also transmits residual texture data representing changes that must be made to the predicted image data to generate accurate image data. Encoding of image data using block based motion vectors and texture data is known as “motion compensated transform encoding.”
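A rough sketch of the decoder-side prediction step follows: the block indicated by the motion vector is fetched from a previously coded VOP and the transmitted residual is added to it. The helper names are invented for illustration, and the transform coding of the residual is omitted.

```python
import numpy as np

def predict_block(reference, top_left, motion_vector, size=8):
    """Fetch the block that the motion vector points to in a previously
    coded VOP (the prediction)."""
    y, x = top_left
    dy, dx = motion_vector
    return reference[y + dy:y + dy + size, x + dx:x + dx + size]

def reconstruct_block(reference, top_left, motion_vector, residual):
    """Motion compensated reconstruction: prediction plus the transmitted
    residual texture data."""
    prediction = predict_block(reference, top_left, motion_vector,
                               size=residual.shape[0])
    return prediction + residual

# Usage: an 8 x 8 block at (0, 0) predicted from a block displaced by (2, 3)
# in the reference VOP, then corrected by a small residual.
reference_vop = np.arange(64 * 64, dtype=np.int32).reshape(64, 64)
residual = np.ones((8, 8), dtype=np.int32)
assert reconstruct_block(reference_vop, (0, 0), (2, 3), residual).shape == (8, 8)
```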
Coding according to MPEG-4, V.M. 5.1 codes video data efficiently. Further, it provides for relatively simple decoding, permitting viewers to access coded video data with low-cost, low-complexity decoders. The coding proposal is limited, however, because it does not provide for functionalities to be attached to video objects.
As the MPEG-4, V.M. 5.1 coding standard evolved, a proposal was made to integrate functionalities. The proposed system, a single layer coding system, is shown in FIG. 1B. There, video data is subject to two types of coding. According to the proposal, texture information in VOPs is coded on a block basis according to motion compensated transform encoding. Motion vector information would be coded according to a different technique, mesh node motion encoding. Thus, encoded data output from an encoder 110 includes block based texture data and mesh node based motion vectors.
Mesh node modeling is a well known tool in the area of computer graphics for generating synthetic scenes. Mesh modeling maps artificial or real texture to wireframe models and may provide animation of such scenes by moving the nodes or node sets. Thus, in computer graphics, mesh node modeling represents and animates synthetic content. Mesh modeling also finds application when coding natural scenes, such as in computer vision applications. Natural image content is captured by a computer, broken down into individual components and coded via mesh modeling. As is known in the field of synthetic video, mesh modeling provides significant advantages in attaching functionalities to video objects. Details of known mesh node motion estimation and decoding can be found in: Nakaya, et al., “Motion Compensation Based on Spatial Transformations,” IEEE Trans. Circuits and Systems for Video Technology, pp. 339-356, June 1994; Tekalp, et al., “Core experiment M2: Updated description,” ISO/IEC JTC1/SC29/WG11 MPEG96/1329, September 1996; and Tekalp, et al., “Revised syntax and results for CE M2 (Triangular mesh-based coding),” ISO/IEC JTC1/SC29/WG11 MPEG96/1567, November 1996.
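The spatial-transformation style of mesh motion compensation discussed in the references above can be sketched as follows: the motion of a triangle's three nodes defines an affine warp, and every pixel inside the triangle is mapped back to the reference VOP through that warp. This is an illustrative reconstruction of the general technique, not the method claimed here, and the helper names are hypothetical.

```python
import numpy as np

def affine_from_triangle(src_nodes, dst_nodes):
    """Solve for the six affine parameters that carry the three mesh node
    positions of a triangle (src) to their positions in the reference VOP (dst)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_nodes, dst_nodes):
        A.append([x, y, 1, 0, 0, 0]); b.append(u)
        A.append([0, 0, 0, x, y, 1]); b.append(v)
    a1, a2, a3, a4, a5, a6 = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.array([[a1, a2, a3], [a4, a5, a6]])

def warp_point(affine, point):
    """Map one pixel location inside the triangle into the reference VOP."""
    x, y = point
    return affine @ np.array([x, y, 1.0])

# If the three nodes of a triangle move by (+2, +1), every interior point
# is predicted from a location shifted the same way in the reference VOP.
src = [(0, 0), (8, 0), (0, 8)]
dst = [(2, 1), (10, 1), (2, 9)]
M = affine_from_triangle(src, dst)
assert np.allclose(warp_point(M, (3, 3)), (5, 4))
```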
A multiplexer 120 at the encoder merges the data with other data necessary to provide for complete encoding such as administrative overhead data, possibly audio data or data from other video objects. The merged coded data is output to the channel 130. A decoder includes a demultiplexer 140 and a VOP decoder 150 that inverts the coding process applied at the encoder. The texture data and motion vector data of a particular VOP are decoded by the decoder 150 and output to a compositor 160. The compositor 160 assembles the decoded information with other data to form a video data stream for display.
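For orientation, the flow through the single layer system of FIG. 1B can be summarized structurally as below. The class names merely stand in for the numbered elements of the figure; they are illustrative assumptions, and no real codec API is implied.

```python
class Multiplexer:                      # element 120
    def merge(self, coded_vop, other_data):
        return {"vop": coded_vop, "other": other_data}

class Demultiplexer:                    # element 140
    def split(self, channel_data):
        return channel_data["vop"], channel_data["other"]

class VopDecoder:                       # element 150: inverts the encoder's coding
    def decode(self, coded_vop):
        texture, motion = coded_vop
        return {"texture": texture, "motion": motion}

class Compositor:                       # element 160: assembles data for display
    def compose(self, decoded_vop, other_data):
        return [decoded_vop, other_data]

def transmit(coded_vop, other_data):
    """Carry one coded VOP through the channel (130) and back out to display."""
    channel_data = Multiplexer().merge(coded_vop, other_data)
    vop, other = Demultiplexer().split(channel_data)
    return Compositor().compose(VopDecoder().decode(vop), other)

# Usage with placeholder texture/motion payloads.
frame = transmit(("texture bits", "motion vectors"), "audio and overhead")
```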
By coding image motion according to mesh node notation, the single layer system of FIG. 1B permits decoders to apply functionalities to a decoded image. However, it also suffers from an important disadvantage: All decoders must decode mesh node motion vectors. Decoding of mesh node motion vectors is computationally more complex than decoding of block based motion vectors. The decoders of the system of FIG. 1B are more costly because they must meet higher computational requirements. Imposing such cost requirements is disfavored, particularly for general purpose coding protocols where functionalities are used in a limited number of coding applications.
Thus, there is a need in the art for a video coding protocol that permits functionalities to be attached to video objects. Further, there is a need for such a coding protocol that is inter-operable with simple decoders. Additionally, there is a need for such a coding protocol that provides coding for the functionalities in an efficient manner.
SUMMARY OF THE INVENTION
The disadvantages of the prior art are alleviated to a great extent by a method and apparatus for coding video data as base layer data and enhancement layer data. The base layer data...
