Depth painting for 3-D rendering applications

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

Reexamination Certificate (active)

Patent number: 06417850

ABSTRACT:

BACKGROUND OF THE INVENTION
In the video and film industry, the animation process can take any of several forms: conventional cel animation, the use of three-dimensional (“3-D”) models and effects, and image-based rendering. Cel animation is the traditional method of generating animation by painting a foreground object on a sheet of clear celluloid (cel) and placing the cel over a background; the animation is produced as a series of cels with the foreground object moved to a new position in each cel. A 3-D model of an object is a representation containing geometry information and surface attributes that allows the 3-D object to be displayed on a 2-D display. The geometry information typically defines the surfaces of the 3-D object as a list of flat polygons sharing common sides and vertices, while the surface attributes specify the texture and color to apply to the object. Both the generation of cels for cel animation and the generation of 3-D models are laborious and time-consuming.
Image-based rendering techniques use 2-D images to visualize, edit, and manipulate 3-D static scenes. The 3-D effects are generated by rendering a novel view of the 3-D static scene from another viewpoint, that is, by moving a virtual camera position. An image-based rendering technique for generating 3-D effects from a set of two-dimensional (“2-D”) images of a 3-D static scene is described in Greene et al., “Creating Raster Omnimax Images from Multiple Perspective Views using the Elliptical Weighted Average Filter,” IEEE Computer Graphics and Applications, pages 21-27, June 1986. An image-based rendering technique such as that described in Greene et al. requires a set of 2-D images of the 3-D static scene taken from multiple perspective views; the set of 2-D images is required to provide the 3-D geometric information needed to render novel views of the scene.
Current image-based rendering techniques such as the technique described in Greene et al. may be used only when a set of 2-D images of the 3-D static scene is available to provide 3-D geometric information. However, there may be only a single 2-D image of a 3-D static scene, for example a painting or a photograph, or the single 2-D image may be an abstract painting with no depth. A technique for generating 3-D effects from a single 2-D image of a 3-D static scene is described in Horry et al., “Tour into the Picture: Using a Spidery Mesh Interface to Make Animation from a Single Image,” Computer Graphics (SIGGRAPH '97), pages 225-232, August 1997. Horry et al. provide a user interface that allows the user to create 3-D effects from a single 2-D image. After the user has selected a vanishing point and distinguished foreground objects from background objects in the single 2-D image, a 3-D model is generated from simple polygons and planar regions. Novel views of the single 2-D image are then generated using this 3-D model.
SUMMARY OF THE INVENTION
Using the present invention, an image from a new point of view may be rendered from an original two-dimensional image without a 3-D model. Multiple layers are defined by assigning depth to pixels in the original image, and pixels with depth are added to portions of layers covered by a foreground layer in the original image. Pixels from the original image, together with the added pixels, are reprojected to the new point of view: their new image locations are calculated from the new point of view and from dimensional information of the original image, including the assigned depth. The reprojected pixels are then displayed at the calculated locations.
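The reprojection step can be sketched in code. The following is a minimal illustration, not the patent's claimed implementation: it assumes a simple pinhole camera with focal length f and principal point (cx, cy), back-projects each pixel using its assigned depth, translates the camera by t, and forward-warps the pixels with a z-buffer so that nearer pixels win. All names and the camera model here are our own assumptions.

```python
import numpy as np

def reproject(image, depth, f, cx, cy, t):
    """Forward-warp `image` (H, W, 3) with per-pixel `depth` into a
    virtual pinhole camera translated by t = (tx, ty, tz).
    Hypothetical sketch; not the patent's exact formulation."""
    H, W = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((H, W), np.inf)
    ys, xs = np.mgrid[0:H, 0:W]
    # Back-project each pixel to a 3-D point using its assigned depth.
    X = (xs - cx) * depth / f
    Y = (ys - cy) * depth / f
    Z = depth.astype(float)
    # Move the camera, then re-project to new pixel coordinates.
    Xn, Yn, Zn = X - t[0], Y - t[1], Z - t[2]
    un = np.round(f * Xn / Zn + cx).astype(int)
    vn = np.round(f * Yn / Zn + cy).astype(int)
    for y in range(H):
        for x in range(W):
            u, v = un[y, x], vn[y, x]
            # Z-buffer: keep the nearest pixel landing at each location.
            if 0 <= u < W and 0 <= v < H and Zn[y, x] < zbuf[v, u]:
                zbuf[v, u] = Zn[y, x]
                out[v, u] = image[y, x]
    return out
```

With a constant depth of 10, f = 5, and a camera shift of t = (2, 0, 0), every pixel shifts one column to the left, as expected for a lateral camera move.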
In a preferred and novel user interface, the original two-dimensional image and the images from new points of view are displayed simultaneously during the rendering process. The original two-dimensional image may be centered among the images calculated for points of view left, right, up and down relative to the point of view of the original image.
Depth may be assigned to pixels in the original two-dimensional image by designating regions and assigning depth to the pixels within each region as a class. The depths of the pixels within a region may be assigned as a function of pixel position within the region; for example, the depth of a pixel may be proportional to its distance from a region boundary. The depths of pixels within a region may also be assigned as a function of pixel brightness in the region.
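As one concrete reading of the distance-from-boundary rule, the sketch below assigns every pixel of a designated region a depth proportional to its 4-connected distance from the region boundary, computed by breadth-first search. The function name, the choice of distance metric, and the `scale` factor are our own assumptions; the patent does not prescribe them.

```python
import numpy as np
from collections import deque

def depth_from_boundary(mask, scale=1.0):
    """Assign each pixel inside the boolean region `mask` a depth
    proportional to its 4-connected distance from the region boundary.
    Pixels outside the region get depth -scale (i.e. unassigned)."""
    H, W = mask.shape
    dist = np.full((H, W), -1.0)
    q = deque()
    # Seed: region pixels touching the outside (or the image edge).
    for y in range(H):
        for x in range(W):
            if not mask[y, x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < H and 0 <= nx < W) or not mask[ny, nx]:
                    dist[y, x] = 0.0
                    q.append((y, x))
                    break
    # BFS inward: distance grows by one per 4-connected step.
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and dist[ny, nx] < 0:
                dist[ny, nx] = dist[y, x] + 1.0
                q.append((ny, nx))
    return scale * dist
```

A brightness-based assignment would follow the same per-region pattern, with `scale * image[mask]` in place of the distance transform.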
Depth may be assigned to a region such that the region is rotated relative to the original image. The resultant gaps between boundaries as viewed from new points of view may be filled by recomputing depths of pixels in adjacent regions to create smooth depth transitions between the region boundaries.
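One way to realize the described gap filling is iterative relaxation: depths inside a transition band are repeatedly replaced by the average of their four neighbours while pixels outside the band stay fixed, converging to a smooth depth ramp between the two fixed region boundaries. This is our own smoothing sketch; the patent states only the goal of smooth depth transitions, not this method.

```python
import numpy as np

def smooth_transition(depth, band, iters=200):
    """Relax depths inside the boolean mask `band` toward the average of
    their 4-neighbours, leaving all pixels outside the band fixed.
    Repeated averaging yields a smooth ramp between fixed boundaries."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        # Jacobi-style update: averages computed from the previous state.
        avg = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1)) / 4.0
        d[band] = avg[band]
    return d
```

For a band bracketed by depth 0 on one side and depth 4 on the other, the relaxed depths approach the linear ramp 1, 2, 3 across the band.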


REFERENCES:
patent: 5801717 (1998-09-01), Engstrom et al.
patent: 5914721 (1999-06-01), Lim
patent: 6005967 (1999-12-01), Nakagawa et al.
patent: 6057847 (2000-05-01), Jenkins
patent: 6111582 (2000-08-01), Jenkins
Heeger, D.J., and Bergen, J.R., “Pyramid-Based Texture Analysis/Synthesis,” Computer Graphics, pp. 229-238 (Aug. 1995).
Horry, Y., et al., “Tour into the Picture: Using a Spidery Mesh Interface to Make Animation from a Single Image,” Computer Graphics (SIGGRAPH '97), pp. 225-232 (Aug. 1997).
McMillan, L., and Bishop, G., “Plenoptic Modeling: An Image-Based Rendering System,” Computer Graphics (SIGGRAPH '95), pp. 39-46 (Aug. 1995).
Mortensen, E.N., and Barrett, W.A., “Intelligent Scissors for Image Composition,” Computer Graphics (SIGGRAPH '95), pp. 191-198 (Aug. 1995).
Shade, J., et al., “Layered Depth Images,” Computer Graphics (SIGGRAPH '98), pp. 231-242 (Jul. 1998).
Taylor, C.J., et al., “Reconstructing Polyhedral Models of Architectural Scenes from Photographs,” In Fourth European Conference on Computer Vision (ECCV '96), vol. 2, pp. 659-668 (Apr. 1996).
Wang, J.Y.A., and Adelson, E.H., “Representing Moving Images with Layers,” IEEE Transactions on Image Processing, vol. 3, no. 5, pp. 625-638 (Sep. 1994).
Witkin, A., and Kass, M., “Reaction-Diffusion Textures,” Computer Graphics (SIGGRAPH '91), vol. 25, no. 4, pp. 299-308 (Jul. 1991).
McMillan, L., and Bishop, G., “Head-Tracked Stereoscopic Display using Image Warping,” 1995 IS&T/SPIE Symposium on Electronic Imaging Science and Technology, SPIE Proc. #2409A, San Jose, CA, 10 pages (Feb. 5-10, 1995).
DeBonet, J.S., “Multiresolution Sampling Procedure for Analysis and Synthesis of Texture Images,” Computer Graphics (SIGGRAPH '97), pp. 361-368 (Aug. 1997).
Gortler, S.J., et al., “Rendering Layered Depth Images,” Technical Report MSR-TR-97-09, Microsoft Research, Microsoft Corp., pp. 1-10 (Mar. 1997).
Greene, N., and Heckbert, P., “Creating Raster Omnimax Images from Multiple Perspective Views using the Elliptical Weighted Average Filter,” IEEE Computer Graphics and Applications, pp. 21-27 (Jun. 1986).

Profile ID: LFUS-PAI-O-2836194
