Modeling 3D objects with opacity hulls

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate

Details

C345S426000, C382S154000

Status: active

Patent Number: 06791542

ABSTRACT:

FIELD OF THE INVENTION
The invention relates generally to computer graphics, and more particularly to acquiring images of three-dimensional physical objects to generate 3D computer graphics models that can be rendered in realistic scenes using the acquired images.
BACKGROUND OF THE INVENTION
Three-dimensional computer graphics models are used in many computer graphics applications. Generating 3D models manually is time consuming, and causes a bottleneck for many practical applications. Besides the difficulty of modeling complex shapes, it is often impossible to replicate the geometry and appearance of complex objects using prior art parametric reflectance models.
Not surprisingly, systems for generating 3D models automatically by scanning or imaging physical objects have greatly increased in significance. An ideal system would acquire the shape and appearance of an object automatically, and construct a detailed 3D model that can be placed in an arbitrary realistic scene with arbitrary novel illumination.
Although there has been much recent work towards this goal, no system to date fulfills all of these requirements. Many systems, including most commercial systems, focus on capturing accurate shape, but neglect to acquire an accurate appearance. Other methods capture reflectance properties of 3D objects and fit these properties to parametric bi-directional reflectance distribution functions (BRDFs). However, those methods fail for complex anisotropic BRDFs and do not model important appearance effects, such as inter-reflections, self-shadowing, translucency, sub-surface light scattering, or refraction.
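As a concrete illustration of the parametric-fitting approach discussed above, the sketch below fits a simple Lambertian-plus-Phong reflectance model to synthetic samples by linear least squares. The model form, sample geometry, and coefficient values are illustrative assumptions, not taken from the patent or from the cited methods.

```python
import numpy as np

# Hypothetical sketch: fit a diffuse + Phong-lobe BRDF to "measured"
# reflectance samples. The model is linear in (kd, ks), so ordinary
# least squares recovers the coefficients.

def phong_brdf(n_dot_l, r_dot_v, kd, ks, shininess=20.0):
    """Reflected radiance factor for one light/view sample."""
    return kd * n_dot_l + ks * np.maximum(r_dot_v, 0.0) ** shininess

# Synthetic measurements: random light/view geometry, known coefficients.
rng = np.random.default_rng(0)
n_dot_l = rng.uniform(0.0, 1.0, 200)   # cos(normal, light)
r_dot_v = rng.uniform(0.0, 1.0, 200)   # cos(reflection, view)
true_kd, true_ks = 0.6, 0.3
observed = phong_brdf(n_dot_l, r_dot_v, true_kd, true_ks)

# Design matrix: one column per linear coefficient.
A = np.stack([n_dot_l, np.maximum(r_dot_v, 0.0) ** 20.0], axis=1)
(kd_fit, ks_fit), *_ = np.linalg.lstsq(A, observed, rcond=None)
```

A fit like this succeeds only when the chosen parametric lobe actually matches the material; as the passage above notes, it breaks down for complex anisotropic BRDFs and cannot express inter-reflections, self-shadowing, translucency, sub-surface scattering, or refraction.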
There have also been a number of image-based methods for acquiring and representing complex objects. But they either lack a 3D shape model, assume accurate 3D geometry, do not allow rendering the objects under novel arbitrary illumination, or are restricted to a single viewpoint. All of these systems require substantial manual involvement.
There are many methods for acquiring high-quality 3D shape from physical objects, including contact digitizers, passive stereo depth-extraction, and active light imaging systems. Passive digitizers are inadequate where the object being digitized does not have sufficient texture. Nearly all passive methods assume that the BRDF is Lambertian, or does not vary across the surface.
Magda et al., in "Beyond Lambert: Reconstructing Surfaces with Arbitrary BRDFs," Proc. of IEEE International Conference on Computer Vision (ICCV), 2001, described a stereopsis method that uses Helmholtz reciprocity to extract depth maps from objects with arbitrary BRDFs. However, their method is not robust for smooth objects. In addition, their method does not take inter-reflections and self-shadowing into account.
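Helmholtz reciprocity, the property the Magda et al. method exploits, is the standard symmetry of the BRDF under exchange of incoming and outgoing directions; swapping camera and light source then yields a constraint on the surface normal that is independent of the (arbitrary) BRDF. The following is a textbook statement, not quoted from the patent:

```latex
% Helmholtz reciprocity of the BRDF:
f_r(\omega_i \to \omega_o) \;=\; f_r(\omega_o \to \omega_i).
% Consequence used in the Helmholtz stereopsis literature: for a
% reciprocal image pair with intensities I_1, I_2, unit directions
% v_1, v_2 from the surface point toward the two camera/light
% positions at distances d_1, d_2, the surface normal n satisfies
% the BRDF-independent constraint
\left( I_1 \, \frac{v_1}{d_1^{2}} \;-\; I_2 \, \frac{v_2}{d_2^{2}} \right) \cdot n \;=\; 0.
```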
Active light systems, such as laser range scanners, are very popular and have been employed to acquire large models in the field, see Levoy et al., "The Digital Michelangelo Project: 3D Scanning of Large Statues," Computer Graphics, SIGGRAPH 2000 Proceedings, pp. 131-144, 2000, and Rushmeier et al., "Acquiring Input for Rendering at Appropriate Levels of Detail: Digitizing a Pietà," Proceedings of the 9th Eurographics Workshop on Rendering, pp. 81-92, 1998.
Active light systems often require a registration step to align separately acquired scanned meshes, see Curless et al., "A Volumetric Method for Building Complex Models from Range Images," Computer Graphics, SIGGRAPH 96 Proceedings, pp. 303-312, 1996, and Turk et al., "Zippered Polygon Meshes from Range Images," Computer Graphics, SIGGRAPH 94 Proceedings, pp. 311-318, 1994. Alternatively, the scanned geometry is aligned with separately acquired texture images, see Bernardini et al., "High-Quality Texture Reconstruction from Multiple Scans," IEEE Trans. on Vis. and Comp. Graph., 7(4):318-332, 2001.
Gaps due to missing data often must be filled as well. Systems have been constructed in which multiple lasers are used to acquire a surface color estimate along the lines-of-sight of the imaging system. However, those systems are not useful for capturing objects under realistic illumination. All active light systems place restrictions on the types of materials that can be scanned, as described in detail by Hawkins et al., in "A Photometric Approach to Digitizing Cultural Artifacts," 2nd International Symposium on Virtual Reality, Archaeology, and Cultural Heritage, 2001.
To render objects constructed of arbitrary materials, image-based rendering can be used. Image-based representations have the advantage of capturing and representing an object regardless of the complexity of its geometry and appearance. Prior art image-based methods allowed for navigation within a scene using correspondence information, see Chen et al., "View Interpolation for Image Synthesis," Computer Graphics, SIGGRAPH 93 Proceedings, pp. 279-288, 1993, and McMillan et al., "Plenoptic Modeling: An Image-Based Rendering System," Computer Graphics, SIGGRAPH 95 Proceedings, pp. 39-46, 1995. Because those methods do not construct a model of the 3D object, they are severely limited.
Light field methods achieve similar results without geometric information, but with an increased number of images, see Gortler et al., "The Lumigraph," Computer Graphics, SIGGRAPH 96 Proceedings, pp. 43-54, 1996, and Levoy et al., "Light Field Rendering," Computer Graphics, SIGGRAPH 96 Proceedings, pp. 31-42, 1996. The best of those methods, as described by Gortler et al., includes a visual hull of the object for improved ray interpolation. However, those methods use static illumination, and cannot accurately render objects into novel arbitrary realistic scenes.
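A two-plane light field, as in the Lumigraph and Light Field Rendering papers cited above, stores radiance samples L(u, v, s, t) and answers ray queries by interpolating among stored views. The minimal sketch below interpolates bilinearly over the camera plane only; the array layout and coordinate conventions are assumptions for illustration, not the cited systems' exact schemes.

```python
import numpy as np

# Illustrative two-plane light field lookup (assumed conventions):
# L has shape (U, V, S, T); (u, v) are continuous camera-plane
# coordinates; (s, t) are integer image-plane indices.

def sample_light_field(L, u, v, s, t):
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1 = min(u0 + 1, L.shape[0] - 1)
    v1 = min(v0 + 1, L.shape[1] - 1)
    du, dv = u - u0, v - v0
    # Bilinearly blend the four nearest stored views.
    return ((1 - du) * (1 - dv) * L[u0, v0, s, t]
            + du * (1 - dv) * L[u1, v0, s, t]
            + (1 - du) * dv * L[u0, v1, s, t]
            + du * dv * L[u1, v1, s, t])

# Toy light field whose radiance equals u + v, so interpolation is exact.
L = np.fromfunction(lambda u, v, s, t: u + v, (4, 4, 2, 2))
val = sample_light_field(L, 1.5, 2.25, 0, 0)   # expect 1.5 + 2.25 = 3.75
```

Interpolating rays this way, with no geometry, is what forces the dense image sampling mentioned above; Gortler et al.'s visual hull supplies approximate depth so that rays can be matched more accurately with fewer images.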
An intermediate between purely model-based and purely image-based methods uses view-dependent texture mapping, see Debevec et al., "Modeling and Rendering Architecture from Photographs: A Hybrid Geometry- and Image-Based Approach," Computer Graphics, SIGGRAPH 96 Proceedings, pp. 11-20, 1996, Debevec et al., "Efficient View-Dependent Image-Based Rendering with Projective Texture-Mapping," Proceedings of the 9th Eurographics Workshop on Rendering, pp. 105-116, 1998, and Pulli et al., "View-Based Rendering: Visualizing Real Objects from Scanned Range and Color Data," Eurographics Rendering Workshop 1997, pp. 23-34, 1997. They combine simple geometry and sparse texture data to accurately interpolate between images. Those methods are effective despite their approximate 3D shapes, but they have limitations for highly specular surfaces due to the relatively small number of texture maps.
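View-dependent texture mapping blends the reference images according to how close each reference camera's direction is to the novel viewing direction. The weighting below, a clamped-cosine power normalized to sum to one, is an illustrative choice, not the exact scheme of the cited papers.

```python
import numpy as np

# Hypothetical view-dependent blending weights: reference cameras whose
# directions align with the novel view direction get higher weight; the
# cosine-power sharpness parameter is an assumption for illustration.

def blend_weights(view_dir, cam_dirs, sharpness=4.0):
    view_dir = view_dir / np.linalg.norm(view_dir)
    cams = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    # Clamp back-facing cameras to zero, then sharpen and normalize.
    w = np.clip(cams @ view_dir, 0.0, None) ** sharpness
    return w / w.sum()

# Three reference cameras along the coordinate axes; novel view along x.
cams = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
w = blend_weights(np.array([1.0, 0.0, 0.0]), cams)
```

With only a handful of reference views, such blends must interpolate across wide angular gaps, which is exactly why the specular highlights mentioned above are reproduced poorly.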
Surface light fields can be viewed as a more general and more efficient representation of view-dependent texture maps, see Nishino et al., "Eigen-Texture Method: Appearance Compression based on 3D Model," Proc. of Computer Vision and Pattern Recognition, pp. 618-624, 1999, Miller et al., "Lazy Decompression of Surface Light Fields for Precomputed Global Illumination," Proceedings of the 9th Eurographics Workshop on Rendering, pp. 281-292, 1998, Nishino et al., "Appearance Compression and Synthesis based on 3D Model for Mixed Reality," Proceedings of IEEE ICCV '99, pp. 38-45, 1999, Grzeszczuk, "Acquisition and Visualization of Surface Light Fields," Course Notes, SIGGRAPH 2001, 2001, and Wood et al., "Surface Light Fields for 3D Photography," Computer Graphics, SIGGRAPH 2000 Proceedings, pp. 287-296, 2000. Wood et al. store surface light field data on accurate high-density geometry, whereas Nishino et al. use a coarser triangular mesh for objects with low geometric complexity.
Surface light fields are capable of reproducing important global lighting effects, such as inter-reflections and self-shadowing. Images generated with a surface light field usually show the object under a fixed lighting condition. To overcome this limitation, inverse rendering methods estimate the surface BRDF from images and geometry of the object.
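One common way around the fixed-lighting limitation, distinct from BRDF fitting, exploits the linearity of light transport: images captured under individual basis lights combine linearly to yield the object under any novel illumination expressed in that basis. A minimal sketch with made-up image data (this is a general relighting idea, not the patent's method):

```python
import numpy as np

# Hypothetical basis images: each 2x2 "image" was captured with exactly
# one basis light on. Constant images are used here only so the result
# is easy to check by hand.
basis = np.stack([
    np.full((2, 2), 1.0),   # image under basis light 0
    np.full((2, 2), 2.0),   # image under basis light 1
    np.full((2, 2), 0.5),   # image under basis light 2
])

# Novel illumination expressed as intensities on the basis lights.
weights = np.array([0.5, 0.25, 1.0])

# Linearity of light transport: the relit image is the weighted sum
# of the basis images. Result shape: (2, 2).
relit = np.tensordot(weights, basis, axes=1)
```

Because the combination is purely linear, no reflectance model is fitted at all; the cost is instead paid at acquisition time, in the number of basis-lit images that must be captured.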
To achieve a compact BRDF representation, most methods fit a parametric reflection model to the image data, see Lensch et al., “Image-Based Reconstruction of Spatiall
