Scene description generating apparatus and method, object...

Image analysis – Image transformation or preprocessing – Measuring image properties


Details

Classification: C382S243000
Publication type: Reexamination Certificate
Status: active
Patent number: 06621939

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to scene description generating apparatuses and methods for placing static image signals, moving image signals, and graphic data in a screen and for describing a new scene, to object extracting methods, and to recording media.
2. Description of the Related Art
FIG. 19 shows conventional scene description technology for placing static image signals, moving image signals, and graphic data in a screen and for describing a new scene. When input images and graphic data are to be displayed as a scene combining one or more input data, additional information must be provided designating what the constructed scene will be. This additional information is referred to as a scene description (information). The scene description (information) is used to place a part (referred to as an “object”) to be input in a scene. Referring to FIG. 19, an object A02 and an object A03 are displayed based on a scene description (information) A00, thus obtaining a scene A04. Although a two-dimensional scene description is illustrated by way of example in FIG. 19, there are also cases in which a three-dimensional scene is displayed on a two-dimensional display device by describing the three-dimensional scene and projecting it onto a two-dimensional plane. When a scene combining one or more objects is represented based on a scene description, an entire screen A01 displaying an input static image or a moving image may be used as an object. Alternatively, a desired portion of the scene may be separated out as an object A02. This separation is referred to as segmentation.
FIG. 20 shows the structure of a conventional editing system for performing segmentation and generating a scene description. Image processing of an input image or graphic data is performed independently of generation of the scene description. In an image processor B00, input graphic data B01 is transformed into an object B04 by a segmentation unit B02. Segmentation may be performed by various methods, including a chroma-key method that separates out a background of a specific color component, a method that cuts out the contour of an object based on the luminance level gradient, and a method in which the contour is designated by manual operation. A segmented object may then be encoded by an encoder B03 (indicated by a dotted line) using, for example, an encoding system conforming to the ISO14496-2 standard. Meanwhile, a scene description processor B05 generates a scene description B07 based on a designation of what the constructed scene will be.
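
By way of illustration, the chroma-key method mentioned above can be sketched in a few lines. The following Python sketch is not the patent's implementation; the key color, the tolerance value, and the use of NumPy are assumptions made for the example. It also derives the rectangular region containing the object, a notion used again below.

    import numpy as np

    def chroma_key_mask(image, key_color, tolerance=30.0):
        """Return a boolean mask that is True for object (foreground) pixels.

        image     -- H x W x 3 RGB array
        key_color -- RGB triple of the background color to be separated
        tolerance -- maximum distance from the key color still counted
                     as background (an assumed, tunable value)
        """
        diff = image.astype(float) - np.asarray(key_color, dtype=float)
        distance = np.linalg.norm(diff, axis=-1)
        return distance > tolerance

    # A synthetic frame: a red rectangle on a pure-blue background.
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[...] = (0, 0, 255)                 # blue background
    frame[80:160, 120:200] = (200, 50, 50)   # the "object"

    mask = chroma_key_mask(frame, key_color=(0, 0, 255))

    # Rectangular region containing the object (cf. the segmentation
    # into rectangular regions described below).
    ys, xs = np.nonzero(mask)
    top, left = ys.min(), xs.min()
    bottom, right = ys.max() + 1, xs.max() + 1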
There are various types of scene description, including the ISO14496-1 standard MPEG-4 scene description, the virtual reality modeling language (VRML) conforming to the ISO14772-1 standard, the hypertext markup language (HTML) widely used on the Internet, and the multimedia and hypermedia information coding experts group (MHEG) format conforming to the ISO13522-5 standard.
Referring to FIGS. 21 to 23, the ISO14496-1 standard MPEG-4 scene description is taken by way of example to describe the structure, the contents, and an example of a scene description. FIG. 21 shows the structure of a scene description, FIG. 22 shows the contents of a scene description, and FIG. 23 shows an example of a scene. A scene description is represented by basic description units referred to as nodes. A node is a unit for describing an object, a light source, an object's surface characteristics, and the like, and includes data referred to as fields for designating the characteristics and attributes of the node. For example, referring to FIG. 21, a “Transform2D” node is a node capable of designating two-dimensional coordinate transformation, and includes a “translation” field, shown in FIG. 22, designating placement, such as translation. Some fields can in turn designate other nodes; hence, a scene description has a tree structure. When an object is to be placed in a scene, the scene description is grouped into a node representing the object and a node representing its attributes, as shown in FIG. 22, and these are further grouped under a node representing placement. The contents of the scene description shown in FIG. 22 are described below. First, “Group{” is a grouping node for the entire scene, and “children” indicates the start of the description of its child nodes. “Transform2D” is a grouping node for designating coordinate transformation, and “translation x1 y1” designates the placement position. “children[” indicates the start of the description of the child nodes to be placed, and “Shape{” designates incorporation of an object into the scene. “geometry Bitmap{}” indicates a scene object on which a texture image is to be displayed, “appearance Appearance{” designates a surface characteristic of the scene object, and “texture ImageTexture{url}” designates the image object to be used as the texture. In accordance with these contents, the image object is placed as shown in FIG. 23: the object indicated by the “Shape” node is translated as designated by its parent node, the “Transform2D” node.
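
Assembled in order, the fragments quoted above yield the following tree-structured scene description (a reconstruction for readability; as in FIG. 22, “x1 y1” and “url” are placeholders for an actual position and image address):

    Group {
      children [
        Transform2D {
          translation x1 y1
          children [
            Shape {
              appearance Appearance {
                texture ImageTexture { url }
              }
              geometry Bitmap {}
            }
          ]
        }
      ]
    }

The nesting makes the tree structure explicit: the “Shape” node (the object) is a child of the “Transform2D” node (the placement), which is itself a child of the grouping node for the entire scene.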
FIG. 23 shows an example of this. Referring to FIG. 23, an object in an input image is segmented, as a rectangular region containing the object, by the segmentation unit B02 shown in FIG. 20. The object B04 is then placed in the scene based on the designation in the scene description B07 generated by the scene description generator B06.
Next, an image object encoding system is described using the ISO14496-2 standard MPEG-4 Video by way of example. Referring to FIG. 24, an elliptical object D01 in an input image D00 is segmented from a background object D03, and the object D01 is encoded. When encoding the object D01, a region D02 including the object D01 is set; in MPEG-4 Video, a rectangular region is used, and the area outside the rectangular region is not encoded. Encoding is performed in small block units, hereinafter referred to as encoding blocks. When an encoding block, such as the encoding block D05, does not include object data, only a flag representing “there is no data to be encoded” needs to be encoded for that block. When an encoding block, such as the encoding block D06, includes both an object region and a region without the object, the pixel levels of the region outside the object can be set to an arbitrary value before encoding. This is because the form (contour) of the object D01 is encoded separately, and data outside the object is ignored when decoding. The background D03 is itself also an object. When encoding the background object D03, a rectangular region D04 including the object D03 is set; this rectangular region D04 covers the entire frame of the input image and is encoded in the same manner as the object D01. In FIG. 24, a shaded portion indicates the object to be encoded; here, the entire frame of the input image is included in the rectangular region D04. When an encoding block D07 includes data both inside and outside the object, the data outside the object can be set to an arbitrary value and then encoded. When an encoding block D08 does not include object data, only the flag representing “there is no data to be encoded” is encoded.
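
The per-block behavior just described can be summarized in a short sketch. This is a simplified illustration of shape-adaptive block encoding, not actual MPEG-4 Video syntax; the 16-pixel block size matches MPEG-4 macroblocks, but the padding value and the emit_skip_flag/encode_block helpers are hypothetical stand-ins for a real encoder.

    import numpy as np

    BLOCK = 16  # encoding-block size in pixels

    def emit_skip_flag(y, x):
        # Stand-in for writing the "there is no data to be encoded" flag.
        print(f"block ({y},{x}): skip flag only")

    def encode_block(y, x, block):
        # Stand-in for texture-encoding one block.
        print(f"block ({y},{x}): encode {block.shape[0]}x{block.shape[1]} pixels")

    def encode_object(texture, shape_mask, pad_value=0):
        """Encode the rectangular region containing an object, block by block.

        texture    -- H x W array of pixel values for the rectangular region
        shape_mask -- H x W boolean array, True where the object is present;
                      the contour is encoded separately, so pixels outside
                      the object may be set to any value
        """
        h, w = texture.shape
        for y in range(0, h, BLOCK):
            for x in range(0, w, BLOCK):
                blk_mask = shape_mask[y:y+BLOCK, x:x+BLOCK]
                if not blk_mask.any():
                    emit_skip_flag(y, x)            # block entirely outside the object
                else:
                    blk = texture[y:y+BLOCK, x:x+BLOCK].copy()
                    blk[~blk_mask] = pad_value      # arbitrary value outside the object
                    encode_block(y, x, blk)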
Referring to FIG. 25, when an image object, such as MPEG-4 Video, is placed in a scene, a placement position of the object in scene coordinates is designated, and this placement position is described in the scene description. The placement position can be designated in two-dimensional or in three-dimensional coordinates. Alternatively, the placement position can be designated by alignment constraints, such as “place the object at the lower left of the screen”.
In FIG. 25, the center of the rectangular region containing the object is used as the positional reference of the object. Alternatively, the centroid of the object or the upper left of the object can be used as the positional reference. Hence, the object is placed according to the placement position designated in the scene description, relative to the chosen positional reference.
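
The candidate positional references mentioned above can be computed directly from the segmented object. The following sketch is illustrative only; the function name and the (top, left) offset convention are assumptions, and whichever reference is used must match the one the scene description expects.

    import numpy as np

    def positional_references(shape_mask, top, left):
        """Candidate positional references for an object whose rectangular
        region starts at (top, left) in input-image coordinates.

        shape_mask -- H x W boolean array, True on object pixels
        """
        h, w = shape_mask.shape
        upper_left = (left, top)                        # upper left of the object
        center = (left + w / 2.0, top + h / 2.0)        # center of the rectangle
        ys, xs = np.nonzero(shape_mask)
        centroid = (left + xs.mean(), top + ys.mean())  # centroid of the object
        return upper_left, center, centroid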
