Image analysis – Image transformation or preprocessing – Mapping 2-d image onto a 3-d surface
Reexamination Certificate
1999-07-01
2002-08-13
Couso, Yon J. (Department: 2623)
C345S419000
active
06434277
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to image processing apparatuses and methods, and media therefor, and in particular, to an image processing apparatus and method that easily implements processing such as three-dimensional editing on a two-dimensionally displayed three-dimensional object, and a medium therefor.
2. Description of the Related Art
Various methods have been proposed that implement various processes on a two-dimensional image and extract information necessary for those processes from the two-dimensional image. Documents describing such methods include James D. Foley, Andries van Dam, Steven K. Feiner, and John F. Hughes, "Computer Graphics: Principles and Practice", Addison-Wesley Publishing Company, 1996 (hereinafter referred to as "Document 1"), Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik, "Modeling and Rendering Architecture from Photographs: A hybrid geometry-and-image-based approach", Proceedings of SIGGRAPH 96, pp. 11-20 (hereinafter referred to as "Document 2"), Olivier Faugeras, "Three-Dimensional Computer Vision", The MIT Press (hereinafter referred to as "Document 3"), Kenneth P. Fishkin and Brian A. Barsky, "A Family of New Algorithms for Soft Filling", Proceedings of SIGGRAPH 84, pp. 235-244 (hereinafter referred to as "Document 4"), Pat Hanrahan and Paul Haeberli, "Direct WYSIWYG Painting and Texturing on 3D Shapes", Proceedings of SIGGRAPH 90, pp. 215-233 (hereinafter referred to as "Document 5"), Youichi Horry, Ken-ichi Anjyo, and Kiyoshi Arai, "Tour Into the Picture: Using a Spidery Mesh Interface to Make Animation from a Single Image", Proceedings of SIGGRAPH 97, pp. 225-232 (hereinafter referred to as "Document 6"), and Michael Gleicher, "Image Snapping", Proceedings of SIGGRAPH 95, pp. 183-190 (hereinafter referred to as "Document 7").
In Document 1, image processing called "two-dimensional paint" is described, in which a computer is used to perform processing in the same way that a designer or the like draws a picture on paper using a paintbrush or an airbrush (a technique that draws a picture by spraying paint onto paper).
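The "spray" behavior of an airbrush can be sketched as paint deposited with a Gaussian falloff around the brush center. This is a minimal illustration only; the `airbrush` function, its parameters, and the falloff model are assumptions for exposition and are not taken from Document 1.

```python
import numpy as np

def airbrush(canvas, cx, cy, radius=8.0, color=1.0, flow=0.3):
    """Deposit paint with a Gaussian falloff around (cx, cy).

    canvas: 2-D float array of pixel intensities in [0, 1].
    flow:   fraction of `color` blended in at the spray center.
    """
    h, w = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Spray weight is strongest at the center and fades with distance.
    weight = flow * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * radius ** 2))
    # Alpha-blend the existing canvas toward the spray color.
    canvas[:] = canvas * (1 - weight) + color * weight
    return canvas

canvas = np.zeros((32, 32))
airbrush(canvas, 16, 16)
```

Repeated application along a stroke path accumulates paint gradually, which is what gives an airbrush its soft appearance compared with a hard-edged brush.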
In this type of conventional two-dimensional paint, even when a three-dimensional object is displayed in an image, the image itself is treated as a two-dimensional plane. Accordingly, when characters or figures are rendered without regard to the direction, in three-dimensional space, of the three-dimensional object displayed on the two-dimensional plane, the image looks unnatural.
For example, in the case where a house-shaped three-dimensional object is displayed as shown in FIG. 1A, and characters are rendered on a wall of the house ignoring the direction of the wall, the characters do not look as if they are written on the wall, as shown in FIG. 1B. Likewise, when a rectangle is rendered on a wall ignoring its direction in order to add a parallelepiped room to the house, the image looks unnatural, as shown in FIG. 1C. In the case where a cylinder is displayed in a two-dimensional image as shown in FIG. 2A, and characters are rendered on its side surface ignoring the curvature of that surface, the characters do not look as if they are written on the side surface, as shown in FIG. 2B.
Accordingly, in order that an image not look unnatural in two-dimensional paint, a character or figure must be rendered transformed so as to match the direction of the three-dimensional object displayed in the two-dimensional image. Performing the operations for such rendering requires a degree of experience.
Therefore, there is a method in which the user uses a numeric keypad or a graphical user interface (GUI) to input the angle of inclination of the three-dimensional object displayed in the two-dimensional image, and the computer uses that input to transform a new character or figure to be rendered. In this method, the user must adjust the input angle of inclination while viewing the rendered character or figure until the result does not look unnatural. This adjustment also requires a degree of experience.
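The transformation described above, warping a flat character or figure so that it appears to lie on an inclined planar surface such as a wall, is commonly modeled as a plane projective transform (homography). A minimal sketch follows, solving for the homography from four corner correspondences by the standard direct linear transform; the corner coordinates and helper names are hypothetical examples, not taken from the document.

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 homography H with dst ~ H @ src (DLT).

    src, dst: lists of four (x, y) corner points each.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (up to scale) is the null vector of this 8x9 linear system.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Map one point through H, dividing out the projective scale."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Map the unit square containing a character onto the skewed
# quadrilateral of a wall as it appears in the 2-D image.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(10, 20), (60, 25), (58, 70), (12, 60)]
H = homography(src, dst)
```

Warping every pixel of the character's bounding rectangle through `H` then produces lettering that appears to lie in the plane of the wall rather than in the image plane.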
As described above, when the user instructs the rendering of a character or figure while ignoring the direction of a three-dimensional object displayed on a two-dimensional image, the computer cannot display the character or figure so that it looks natural; in other words, an image in which the character or figure looks as if it had been there originally cannot be obtained. This is due to a lack of information on the position of the three-dimensional object displayed in the two-dimensional image and on the image capture position for the two-dimensional image (the position from which a landscape or the like was observed when the two-dimensional image was captured).
Accordingly, there is a method that uses computer vision to find, from a two-dimensional image, the position of a three-dimensional object in a three-dimensional space displayed in the two-dimensional image, and the image capture position of the three-dimensional object.
In other words, in Document 2, a method is disclosed in which a three-dimensional shape such as a parallelepiped is correlated, using a GUI, with a structure appearing in a plurality of photographs, and the size of the structure and the image capture positions are found. Document 3 discloses various other methods for finding the position of a three-dimensional object and the image capture position for the object.
The above-described methods that use computer vision to compute a three-dimensional object position and an image capture position rely on the principles of triangulation. This requires a plurality of images obtained by capturing the same three-dimensional object from a plurality of image capture positions. However, when two-dimensional paint is performed, such images are not always available, and when the two-dimensional image is a photograph of a picture, a plurality of images is generally not used.
Even if a plurality of images obtained by capturing the same three-dimensional object from a plurality of image capture positions can be prepared, corresponding points of the object (e.g., the vertices of the roof of a structure serving as the three-dimensional object) must be designated in each image in order for the computer to compute the position of the object and the image capture positions. This designation across the images is complicated and time-consuming. In addition, when three-dimensionally natural rendering is performed based on the position (in three-dimensional space) of the three-dimensional object displayed in the two-dimensional image and on the image capture position, three-dimensional data, such as the three-dimensional coordinates of the object as viewed from the image capture position, must be processed, which requires a great amount of computation.
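The triangulation the passage refers to, recovering a point's three-dimensional position from its projections in two images taken from known camera positions, can be sketched with the standard linear (DLT) triangulation method. The camera matrices and point below are hypothetical examples for illustration.

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3-D point from two views by linear triangulation.

    P1, P2:   3x4 camera projection matrices.
    pt1, pt2: the point's (x, y) image coordinates in each view.
    """
    x1, y1 = pt1
    x2, y2 = pt2
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1 * P1[2] - P1[0],
        y1 * P1[2] - P1[1],
        x2 * P2[2] - P2[0],
        y2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # back from homogeneous coordinates

# Two hypothetical cameras: one at the origin, one translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
pt1 = (X_true[0] / X_true[2], X_true[1] / X_true[2])
pt2 = ((X_true[0] - 1.0) / X_true[2], X_true[1] / X_true[2])
print(triangulate(P1, P2, pt1, pt2))   # recovers approximately (0.5, 0.2, 4.0)
```

Note that the sketch presumes the correspondences and camera matrices are already known; designating those correspondences by hand across many images is precisely the burden the passage describes.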
In Document 4, a method for color processing for two-dimensional paint is disclosed.
In two-dimensional paint, the user uses a GUI to select a color for rendering and performs rendering using the selected color. However, the color and brightness of a three-dimensional object displayed in a two-dimensional image vary depending on the positional relationship between the direction of the object's surface and the illumination. For example, painting an entire surface of the three-dimensional object in a single uniform color (the same RGB levels) causes an unnatural rendering result. Accordingly, to obtain a natural rendering result, the painting color must be changed gradationally, considering the positional relationship between the direction of the surface to be painted and the illumination. In particular, in the case where the surface to be painted is curved, pixel levels need to be changed sequentially so that the rendering result looks natural. Thus, a p
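The gradational change described here, brightness falling off as the painted surface turns away from the light, is classically approximated by Lambert's cosine law. A minimal sketch follows, assuming a purely diffuse surface; the `lambert_shade` function and its `ambient` floor term are illustrative assumptions, not taken from Document 4.

```python
import numpy as np

def lambert_shade(base_color, normal, light_dir, ambient=0.15):
    """Scale a paint color by Lambert's cosine law.

    base_color:        RGB triple with components in [0, 1].
    normal, light_dir: 3-vectors (need not be pre-normalized).
    ambient:           floor term so surfaces facing away from the
                       light are not rendered pure black.
    """
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    diffuse = max(0.0, float(n @ l))      # cosine of the incidence angle
    return np.asarray(base_color) * min(1.0, ambient + diffuse)

# On a cylinder the normal swings across the side surface, so the same
# paint color shades progressively darker toward the silhouette.
for angle in np.linspace(0.0, np.pi / 2, 4):
    normal = (np.sin(angle), 0.0, np.cos(angle))
    print(lambert_shade((0.8, 0.2, 0.2), normal, (0.0, 0.0, 1.0)))
```

Applying such a per-pixel scale while painting yields the gradation the passage calls for, instead of the flat, unnatural result of filling with constant RGB levels.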
Ohki Mitsuharu
Totsuka Takashi
Yamada Rui
Bell Boyd & Lloyd LLC