Three dimensional virtual space generation by fusing images

Type: Reexamination Certificate
Filed: 1997-03-05
Issued: 2001-07-31
Examiner: Vo, Cliff N. (Department: 2772)
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension
U.S. Class: C345S419000
Status: active
Patent number: 06268862
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image processing method and apparatus for generating and displaying a virtual environment for virtual reality.
2. Related Background Art
As conventional methods of expressing a three-dimensional object or space and presenting a view image of it from an arbitrary position and direction, the following are known:
(1) A three-dimensional object or space is expressed using shape model data such as polygon data, curved surface data, and the like, texture data representing the surface attribute or pattern, light source data, and the like, and the view image of the space from an arbitrary position and direction is drawn by the rendering technique of computer graphics.
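As a rough illustration of the kind of data involved in method (1), the following sketch shows one possible organization of shape model data, texture/surface-attribute data, and light source data. The class and field names are illustrative assumptions and are not taken from the patent.

```python
# A rough sketch of the shape-model data used in method (1): polygon data,
# texture/surface-attribute data, and light source data. All class and field
# names are illustrative assumptions, not terms taken from the patent.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Material:
    texture_file: str                   # surface pattern
    diffuse: Vec3 = (1.0, 1.0, 1.0)     # surface attribute (reflectance)

@dataclass
class Polygon:
    vertex_indices: List[int]           # indices into the object's vertex list

@dataclass
class SceneObject:
    vertices: List[Vec3]
    polygons: List[Polygon]
    material: Material

@dataclass
class LightSource:
    position: Vec3
    intensity: Vec3

# A renderer would traverse these objects and draw the view image for an
# arbitrary camera position and direction, e.g. by rasterization or ray casting.
```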
(2) Upon creating a three-dimensional virtual environment using the conventional method (1), the elements (coordinate transformation data, shape data, surface attribute data, illumination, and the like) that make up the virtual environment are expressed by a tree structure. That is, the space, ground, architecture, rooms, furniture, illumination, ornaments, and the like that make up a three-dimensional space originally have a hierarchical nesting relationship among them. For example, an ornament on a table depends on the arrangement of the table so that it moves together with the table, and it is often convenient to describe such an ornament relative to the coordinate system of the table. For this reason, a data structure in which arrangements depend hierarchically on one another is used. As a method of expressing such a structure, the virtual environment is expressed by an n-ary tree structure.
For example, FIG. 18 shows an illustration example of a certain simple virtual environment. In the case of this figure, paying attention to a room, table, and sofa, the room is described on a coordinate system C2 transformed from a world coordinate system C0 by a coordinate transformation T2, and the table and sofa in the room are respectively described on coordinate systems C3 and C4 transformed from the coordinate system C2 by coordinate transformations T3 and T4. A pot on the table is described on a coordinate system C5 transformed from the coordinate system C3 by a coordinate transformation T5. Furthermore, ray (or light) space data is arranged on the desk. This data is described on a coordinate system C6 transformed from the coordinate system C3 by a coordinate transformation T6, as is the pot. When these objects are expressed by a typical tree structure, the tree shown in FIG. 19 is obtained.
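The following sketch illustrates this kind of n-ary tree: each node holds the coordinate transformation from its parent's coordinate system, and the world-space placement of an element is obtained by accumulating the transformations along the path from the root. The node names, the use of 4x4 homogeneous matrices, and the numeric offsets are illustrative assumptions, not values from the patent.

```python
# Sketch of the n-ary tree structure described above (cf. FIG. 19).
# Each node carries the coordinate transformation from its parent's
# coordinate system; world-space placement is the product of the
# transforms along the path from the root.
import numpy as np

class Node:
    def __init__(self, name, transform=None, content=None):
        self.name = name
        # T_i: transformation from the parent coordinate system to this
        # node's coordinate system C_i (4x4 homogeneous matrix).
        self.transform = np.eye(4) if transform is None else transform
        self.content = content      # shape data, ray space data, etc.
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def translation(x, y, z):
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

# Reproduce the hierarchy of the example: world -> room -> {table, sofa},
# table -> {pot, ray-space data}. The offsets are arbitrary placeholders.
world = Node("world C0")
room  = world.add(Node("room C2",      translation(0.0, 0.0, 0.0)))
table = room.add(Node("table C3",      translation(1.0, 0.0, 2.0)))
sofa  = room.add(Node("sofa C4",       translation(3.0, 0.0, 1.0)))
pot   = table.add(Node("pot C5",       translation(0.2, 0.8, 0.0)))
rays  = table.add(Node("ray space C6", translation(-0.3, 0.8, 0.1)))

# World placement of the pot: accumulate T2, T3, T5 from the root downward.
pot_world = world.transform @ room.transform @ table.transform @ pot.transform
```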
(3) The images of a three-dimensional object or space are taken in advance from a large number of viewpoints, an image taken under a phototaking condition close to a desired view position and direction is selected from among the taken images, and the three-dimensional object as viewed from that nearby position and direction is displayed, thereby approximately expressing a view from an arbitrary position and direction.
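A minimal sketch of the selection step in method (3) might look as follows; the cost that balances position error against direction error, and the layout of the pre-taken image records, are assumptions made only for illustration.

```python
# Sketch of the view-selection idea in method (3): from images taken in
# advance, pick the one whose recorded camera position and direction are
# closest to the requested view. The weighting between position error and
# direction error is an arbitrary assumption.
import numpy as np

def select_nearest_image(images, view_pos, view_dir, angle_weight=1.0):
    """images: list of (photo, camera_pos, camera_dir) with unit-length camera_dir."""
    view_dir = np.asarray(view_dir) / np.linalg.norm(view_dir)
    best, best_cost = None, float("inf")
    for photo, cam_pos, cam_dir in images:
        pos_err = np.linalg.norm(np.asarray(cam_pos) - np.asarray(view_pos))
        ang_err = np.arccos(np.clip(np.dot(cam_dir, view_dir), -1.0, 1.0))
        cost = pos_err + angle_weight * ang_err
        if cost < best_cost:
            best, best_cost = photo, cost
    return best  # displayed as an approximation of the requested view
```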
(4) Ray space data is generated on the basis of the actually taken images of a three-dimensional object or space, and an image viewed from an arbitrary position and direction is generated and displayed on the basis of the ray space data, thereby reconstructing the three-dimensional object or space.
In this method, an object is expressed as a set of light components emanating from the object without calculating the shape of the object.
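The following sketch illustrates the ray-space idea restricted to a horizontal plane: each ray is indexed by its intercept x with a reference line and its direction u = tan(theta), and a view image is synthesized by looking up, for every pixel, the ray that passes through the viewpoint. The specific parameterization, discretization, and class names are assumptions for illustration and are not taken from the patent.

```python
# Sketch of the ray-space idea in method (4): an object is stored as the set
# of light rays it emits, not as a shape. Rays in a horizontal plane are
# parameterized by (x, u), where x is the intercept with the reference line
# Z = 0 and u = tan(theta) is the ray direction.
import numpy as np

class RaySpace2D:
    def __init__(self, x_range=(-1.0, 1.0), u_range=(-1.0, 1.0), size=(256, 256)):
        self.x_min, self.x_max = x_range
        self.u_min, self.u_max = u_range
        self.nx, self.nu = size
        self.data = np.zeros((self.nx, self.nu, 3))  # one RGB value per discrete ray

    def _index(self, x, u):
        ix = int((x - self.x_min) / (self.x_max - self.x_min) * (self.nx - 1))
        iu = int((u - self.u_min) / (self.u_max - self.u_min) * (self.nu - 1))
        return np.clip(ix, 0, self.nx - 1), np.clip(iu, 0, self.nu - 1)

    def record(self, x, u, color):
        # Filled from actually taken images during the data-generation step.
        self.data[self._index(x, u)] = color

    def lookup(self, X, Z, theta):
        # The ray through viewpoint (X, Z) in direction theta crosses Z = 0 at
        # x = X - Z * tan(theta); a view image is built by one lookup per pixel.
        u = np.tan(theta)
        return self.data[self._index(X - Z * u, u)]
```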
(5) A panorama image obtained by looking around from a given viewpoint is input, and an image corresponding to the view direction of the viewer is generated based on the panorama image (mainly attained by extracting a partial image from the panorama image and correcting distortion of the extracted image), thereby displaying a three-dimensional space from a given point.
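A sketch of the partial-image extraction and distortion correction in method (5), assuming a cylindrical panorama covering 360 degrees horizontally; the projection model, field of view, and image sizes are illustrative assumptions rather than the patent's own procedure.

```python
# Sketch of method (5): a perspective view in an arbitrary direction is cut
# out of a cylindrical panorama taken at one viewpoint, correcting the
# cylindrical distortion pixel by pixel.
import numpy as np

def view_from_panorama(pano, yaw, fov=np.radians(60), out_w=640, out_h=480):
    """pano: H x W x 3 cylindrical panorama covering 360 deg horizontally."""
    H, W, _ = pano.shape
    f = (out_w / 2) / np.tan(fov / 2)          # focal length of the virtual camera
    out = np.zeros((out_h, out_w, 3), dtype=pano.dtype)
    for j in range(out_w):
        alpha = np.arctan((j - out_w / 2) / f)     # horizontal offset from view axis
        theta = yaw + alpha                        # absolute azimuth of this column
        col = int((theta % (2 * np.pi)) / (2 * np.pi) * (W - 1))
        for i in range(out_h):
            # Cylindrical correction: the vertical coordinate on the cylinder
            # shrinks by cos(alpha) relative to the flat image plane.
            h = (i - out_h / 2) / f * np.cos(alpha)
            row = int(np.clip(H / 2 + h * H / 2, 0, H - 1))
            out[i, j] = pano[row, col]
    return out
```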
However, the above-mentioned conventional methods (1) to (5) suffer the following problems.
It is difficult for the conventional method (1) to generate or reconstruct the shape data of an object having a very complicated shape. It is also difficult for method (1) to acquire the shape data of an object with a complicated shape from a real object using a three-dimensional measurement apparatus. In particular, it is even more difficult for method (1) to reconstruct an existing real object that has a complicated shape, a complicated surface pattern, or complicated reflection characteristics (absorption/transmission characteristics). Furthermore, method (1) can generally express an artificial object easily but has difficulty expressing a natural object. However, this method has a merit: it can express an artificial, simple three-dimensional space that is mainly built of planes, such as a room or a row of stores and houses, with a small data volume.
The conventional method (2) is a method of expressing and describing data, and therefore inherits the problems of the conventional method (1). As a method of expressing and describing data, however, it is an excellent one.
The conventional method (3) does not pose the above-mentioned problems. However, since the images to be finally presented must be taken in advance, a very large number of images must be prepared and a huge data volume is required in order to artificially reproduce an arbitrary viewpoint position and direction. In view of the data volume and the phototaking required to obtain such a large number of images, it is impossible to put this method into practical applications. For the same reason, it is nearly impossible to hold all the data needed to express a wide three-dimensional space such as a room, a row of stores and houses, and the like. This method is suitable for expressing a three-dimensional object by taking images of the object from its surrounding positions.
In the conventional method (4), a large number of images need not be taken in advance unlike in the conventional method (3). Once ray space data is generated based on a predetermined number of images taken in advance, a view image from an arbitrary viewpoint position can be generated (strictly speaking, there is a constraint condition). However, in order to present images from every position in a three-dimensional space, a huge volume of ray space data must also be generated and held. This method is also suitable for a three-dimensional object as in the conventional method (3), but is not suitable for expressing a three-dimensional space such as a room, a row of stores and houses, or the like.
The conventional method (5) is suitable for expressing a three-dimensional space such as a room, a row of stores and houses, or the like, and a view image in an arbitrary direction can be presented as long as the viewpoint position is limited. However, when the viewpoint position is to be moved arbitrarily, panorama images from a very large number of viewpoint positions must be prepared in advance so that an arbitrary movement can be approximated. For this reason, it is difficult in practice to attain viewing from an arbitrary viewpoint position, owing to the huge data volume and the difficulty of phototaking. Hence, this processing is normally realized by reducing the data volume and allowing only discrete movements of the viewpoint position.
Furthermore, the conventional methods (1) and (2) on the one hand, and the methods (3), (4), and (5) on the other, are fundamentally different techniques, and there has been no method that can combine them so as to exploit their respective characteristics by effectively using only their merits.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide an image processing method and apparatus which can utilize the characteristics of the conventional methods (1) and (2) and of the methods (3), (4), and (5), which are originally different techniques, and which can combine these methods so as to effectively take advantage of only their merits.
In order to achieve the above object, an image processing method according to the present invention is an image processing method of generating and displaying a virtual environment, comprising:
the model sp
Inventors: Katayama Akihiro; Shibata Masahiro; Uchiyama Shinji
Assignee: Canon Kabushiki Kaisha
Examiner: Vo, Cliff N.