High quality texture reconstruction from multiple scans

Computer graphics processing and selective visual display system – Computer graphics processing – Attributes

Reexamination Certificate


Details

Classifications: C345S581000, C345S586000, C345S606000, C345S629000, C382S294000, C382S295000

Status: active

Patent number: 06750873

ABSTRACT:

FIELD OF THE INVENTION
This invention relates generally to methods and apparatus for generating displayable digital models of physical objects, and in particular relates to such methods and apparatus that operate based on object surface scan and image data.
BACKGROUND OF THE INVENTION
The creation of three-dimensional digital content by scanning real objects has become common practice in graphics applications for which visual quality is paramount, such as animation, e-commerce, and virtual museums. While a significant amount of attention has been devoted to the problem of accurately capturing the geometry of scanned objects, the acquisition of high-quality textures is equally important, but not as widely studied.
Three-dimensional scanners are used increasingly to capture digital models of objects for animation, virtual reality, and e-commerce applications for which the central concerns are efficient representation for interactivity and high visual quality.
Most high-end 3D scanners sample the surface of the target object at a very high resolution. Hence, models created from the scanned data are often over-tessellated, and require significant simplification before they can be used for visualization or modeling. Texture data is often acquired together with the geometry; however, a typical system merely captures a collection of images containing the particular lighting conditions at the time of scanning. When these images are stitched together, discontinuity artifacts are usually visible. Moreover, it is rather difficult to simulate various lighting conditions realistically, or to immerse the model in a new environment.
A variety of techniques can be used to capture digital models of physical objects, including CAT scans and structure-from-motion applied to video sequences. The following description has been restricted for convenience to techniques involving instruments that capture range images (in which each pixel value represents depth) and intensity images (in which each pixel value is proportional to the incident light). A detailed summary of such methods can be found in G. Roth, “Building models from sensor data: an application shared by the computer vision and computer graphics community”, In Proc. of the NATO Workshop on the Confluence of Computer Vision and Computer Graphics, 2000.
The basic operations necessary to create a digital model from a series of captured images are illustrated in FIG. 1. After outliers are removed from the range images, they are in the form of individual height-field meshes. Step A is to align these meshes into a single global coordinate system. In high-end systems registration may be performed by accurate tracking. For instance, the scanner may be attached to a coordinate measurement machine that tracks its position and orientation with a high degree of accuracy. In less expensive systems an initial registration is found by scanning on a turntable, manual alignment, or approximate feature matching. The alignment is then refined automatically using techniques such as the Iterative Closest Point (ICP) algorithm of Besl and McKay.
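For illustration, the following is a minimal sketch of the ICP refinement in step A, assuming the scans are available as NumPy point arrays; the k-d tree nearest-neighbor search, the closed-form SVD transform solve, and the convergence test are conventional choices rather than details fixed by the method described here.

```python
# Minimal sketch of ICP refinement (Besl-McKay style); the data layout,
# convergence test, and SVD-based transform solve are illustrative choices.
import numpy as np
from scipy.spatial import cKDTree

def icp_align(source, target, iters=50, tol=1e-6):
    """Rigidly align the (N, 3) `source` point set to the (M, 3) `target`."""
    src = source.copy()
    tree = cKDTree(target)            # nearest-neighbor structure on target
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # 1. Pair every source point with its closest target point.
        dists, idx = tree.query(src)
        matched = target[idx]
        # 2. Closed-form rigid transform minimizing squared pair distances.
        src_c, tgt_c = src.mean(axis=0), matched.mean(axis=0)
        H = (src - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        # 3. Apply the transform, accumulate it, and test convergence.
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dists.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total           # maps source into target's frame
```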
After registration, scans do not form a single surface, but interpenetrate one another, due to acquisition errors primarily along the line-of-sight in each scan. To form a single surface, in step B the overlapping scans must be averaged. In stitching/zippering methods this averaging is performed between pairs of overlapping meshes. In volumetric/occupancy grid methods line-of-sight errors are averaged by letting all scanned points contribute to a function of surface probability defined on a single volume grid. An advantage of volumetric methods is that all scans representing a surface point influence the final result, rather than simply a pair of scans.
In step B the scans are integrated into a single mesh. The integration may be performed by zippering/stitching, isosurface extraction from volumes, or interpolating mesh algorithms applied to error-corrected points.
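A schematic sketch of the volumetric averaging and integration of step B follows; the per-scan signed-distance callables and the grid layout are illustrative assumptions, and the final mesh would be extracted from the averaged field as its zero isosurface, for example with marching cubes.

```python
# Schematic sketch of volumetric averaging of line-of-sight errors: every
# scan adds a confidence-weighted signed distance to each voxel, so all
# scans covering a surface point influence the result. The `sdf` callables
# and grid layout are illustrative assumptions.
import numpy as np

def integrate_scans(scans, grid_shape, voxel_centers):
    """Average per-scan signed distances over a shared voxel grid.

    scans: callables mapping (P, 3) voxel centers to (distance, weight)
    arrays, with distance measured along the scanner's line of sight.
    """
    D = np.zeros(grid_shape)                  # weighted distance accumulator
    W = np.zeros(grid_shape)                  # weight accumulator
    for sdf in scans:
        d, w = sdf(voxel_centers)
        D += (w * d).reshape(grid_shape)
        W += w.reshape(grid_shape)
    with np.errstate(divide="ignore", invalid="ignore"):
        avg = np.where(W > 0, D / W, np.nan)  # unseen voxels stay undefined
    return avg  # mesh = zero isosurface, e.g. skimage.measure.marching_cubes
```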
To use a texture map with the integrated mesh, in step C the surface is parameterized with respect to a 2D coordinate system and texture coordinates are interpolated between mesh vertices. A simple parameterization is to treat each triangle separately and to pack all of the individual texture maps into a larger texture image. However, the use of mip-mapping in this case is limited since adjacent pixels in the texture may not correspond to adjacent points on the geometry. Another approach is to locate patches of geometry which are height fields that can be parameterized by projecting the patch onto a plane. Stitching methods use this approach by simply considering sections of the scanned height fields as patches.
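The height-field parameterization of step C can be sketched as follows; fitting the projection plane with PCA and normalizing the coordinates into a unit square for atlas packing are illustrative choices.

```python
# Minimal sketch of parameterizing a height-field patch by projecting it
# onto a best-fit plane; the PCA plane fit and unit-square normalization
# are illustrative choices.
import numpy as np

def parameterize_patch(vertices):
    """Assign (u, v) texture coordinates to an (N, 3) height-field patch."""
    centered = vertices - vertices.mean(axis=0)
    # PCA: the two leading right-singular vectors span the projection
    # plane; the third is the patch normal. The projection is injective
    # only if the patch really is a height field over that plane.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    uv = np.stack([centered @ Vt[0], centered @ Vt[1]], axis=1)
    # Normalize into [0, 1]^2 so the patch can be packed into a texture atlas.
    uv -= uv.min(axis=0)
    uv /= uv.max(axis=0)
    return uv
```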
Other methods could be built on tiling methods developed for multiresolution analysis or interactive texture mapping.
Parallel to acquiring the geometry of the model, intensity images are captured to obtain information about the reflectance of the surface. Such images may be recorded with electronic or traditional cameras, or by using polychromatic laser technology. In step D, these images are aligned to the corresponding geometry. In some cases the image acquisition is decoupled from the geometry acquisition. The camera intrinsic and extrinsic parameters for the images are estimated by manual or automatic feature matching. The advantage is that acquisition modalities that cannot capture surface reflectance can be used for capturing geometry.
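A minimal sketch of such decoupled image-to-geometry alignment is given below, assuming the camera intrinsics K are already known and using OpenCV's PnP solver to recover the extrinsic pose from matched 2D-3D features; the wrapper and its names are this sketch's own.

```python
# Minimal sketch of decoupled image-to-geometry alignment: the extrinsic
# camera pose is recovered from matched 2D-3D features with OpenCV's PnP
# solver. The wrapper, its names, and the known-intrinsics assumption are
# this sketch's own.
import numpy as np
import cv2

def estimate_camera_pose(points_3d, points_2d, K):
    """points_3d: (N, 3) surface features; points_2d: (N, 2) image features."""
    ok, rvec, tvec = cv2.solvePnP(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        K.astype(np.float64),
        None,                          # assume an undistorted image
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok:
        raise RuntimeError("PnP failed; correspondences may be degenerate")
    R, _ = cv2.Rodrigues(rvec)         # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)          # world-to-camera extrinsics
```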
In most cases, however, the alignment is performed by calibration. Geometry and intensity are captured simultaneously from scanners with a measured transformation between sensing devices. The resolution of the intensity image may be the same as that of the range image or even higher. When the resolution is the same, texture mapping is unnecessary since a color can be assigned to each vertex. Nevertheless, such a representation is inefficient, and geometric simplification is typically performed before the surface parameterization step.
The main benefit of obtaining intensity and range images simultaneously is that the intensity information can be used in the registration process in step A. Various approaches have been developed to use intensity images in registration. For example, it is known to use color as an additional coordinate in the ICP optimization. This avoids local minima in the solution in areas that have no geometric features, but have significant variations in the intensity. For models with pronounced geometric and intensity features, the method has proven to be very effective. A drawback is having to combine position and color data with different ranges and error characteristics. For subtle feature variations, these can cause one type of data to erroneously overwhelm the other.
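The color-as-extra-coordinate idea amounts to a nearest-neighbor search in a joint position-color space, sketched below; the scale factor lam that balances the two ranges is precisely the delicate choice noted above, and its value here is an arbitrary placeholder.

```python
# Sketch of using color as an additional ICP coordinate: correspondences
# are found in a joint 6D position-color space. The balance factor `lam`
# is an arbitrary placeholder; choosing it is the difficulty noted above.
import numpy as np
from scipy.spatial import cKDTree

def closest_points_with_color(src_xyz, src_rgb, tgt_xyz, tgt_rgb, lam=0.1):
    """Index of the target point nearest each source point in 6D space."""
    src6 = np.hstack([src_xyz, lam * src_rgb])
    tgt6 = np.hstack([tgt_xyz, lam * tgt_rgb])
    _, idx = cKDTree(tgt6).query(src6)
    return idx  # feed these pairs into the usual ICP transform estimate
```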
It is also known to use intensity images to avoid the spatial search required by ICP. Intensity and intensity gradient images from approximately aligned scans are transformed into a common camera view. Locations of corresponding points on overlapping scans are inferred based on the difference between intensity values at a given pixel and the gradient at that pixel. This method works well only if the spatial variation of the gradient is small relative to errors in the alignment of the scans.
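To first order this is the relation I2(p + d) ≈ I2(p) + ∇I2(p) · d, which can be solved per pixel for the smallest displacement d along the gradient; the sketch below assumes the intensity and gradient images have already been rendered into the common camera view, and its variable names are illustrative.

```python
# Sketch of inferring per-pixel correspondence offsets from intensity
# differences: to first order I2(p + d) ~ I2(p) + grad(I2)(p) . d, so the
# smallest displacement along the gradient explaining the residual is
# d = (I1 - I2) * grad / |grad|^2. Variable names are illustrative.
import numpy as np

def pixel_offsets(i1, i2, gx, gy, eps=1e-9):
    """i1, i2: intensity images rendered into a common camera view;
    gx, gy: gradient images of i2. Returns per-pixel (dx, dy)."""
    residual = i1 - i2
    grad_sq = gx**2 + gy**2 + eps      # eps avoids division by zero
    dx = residual * gx / grad_sq
    dy = residual * gy / grad_sq
    # Only trustworthy where the gradient varies slowly relative to the
    # current misalignment, as noted above.
    return dx, dy
```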
A non-ICP method is also known that uses intensity images to refine an initial manual alignment. In this approach pairs of range images are aligned manually by marking three points on overlapping intensity images. The locations of the matching points are refined by searching their immediate neighborhoods with image cross-correlation. A least-squares optimization follows to determine a general 3D transformation that minimizes the distances between the point pairs. Image registration techniques are also used for image mosaics, in which only rotations or translations are considered.
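A sketch of the cross-correlation refinement follows, with window and search radii as illustrative assumptions and image-border handling omitted for brevity.

```python
# Sketch of refining a manually marked match by normalized cross-correlation
# over the point's neighborhood; window/search radii are illustrative and
# image-border handling is omitted for brevity.
import numpy as np

def refine_match(img_a, img_b, pt_a, pt_b, win=7, search=10):
    """Shift pt_b in img_b so its window best correlates with pt_a's window."""
    def patch(img, y, x):
        p = img[y - win:y + win + 1, x - win:x + win + 1].astype(float)
        return (p - p.mean()) / (p.std() + 1e-9)   # normalize the window
    ref = patch(img_a, *pt_a)
    (yb0, xb0), best, best_pt = pt_b, -np.inf, pt_b
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = (ref * patch(img_b, yb0 + dy, xb0 + dx)).mean()
            if score > best:
                best, best_pt = score, (yb0 + dy, xb0 + dx)
    return best_pt  # refined pairs feed the least-squares 3D transform
```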
After the intensity images are aligned to the geometry, illumination invariant maps are computed to estimate the surface reflectance (step E). The number of scans versus the number of intensity images, as well as the resolution of the scans compared to the r

