Methods and apparatus for constructing a 3D model of a scene...

Image analysis – Applications – 3-d or stereo imaging analysis

Reexamination Certificate


Details

C382S270000, C382S285000, C345S424000

active

06373977

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to methods for rendering new views of a scene from a set of input images of the scene and, more particularly, to an improved voxel coloring technique which utilizes an adaptive coloring threshold.
BACKGROUND OF THE INVENTION
Currently there is a great deal of interest in image-based rendering techniques. These methods draw from the fields of computer graphics, computer vision, image processing and photogrammetry. The goal of these methods is to compute new views from one or more images of a scene, be they natural or synthetic. Several images of a scene are acquired from different camera viewpoints. The image data is used to compute one or more images of the scene from viewpoints that are different from the camera viewpoints. These techniques may be referred to as “new view synthesis”. A number of new view synthesis techniques have been disclosed in the prior art.
One new view synthesis technique, called “voxel coloring”, is disclosed by S. Seitz et al. in “Photorealistic Scene Reconstruction by Voxel Coloring”, Proceedings Computer Vision and Pattern Recognition Conf., pp. 1067-1073, 1997. The voxel coloring method requires that the pose of the input images be known. This means that the location and orientation of each camera are known, which allows points in the scene to be projected into the images. Thus, for any point in the scene, it is possible to calculate the corresponding points in the images.
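The projection of a scene point into an image, given a known camera pose, is a standard pinhole-camera computation. A minimal sketch follows; the intrinsics, rotation, and translation values are hypothetical and chosen only for illustration:

```python
import numpy as np

def project_point(point_w, K, R, t):
    """Project a 3D world point into pixel coordinates using a pinhole
    camera with known pose (R, t) and intrinsic matrix K."""
    p_cam = R @ point_w + t          # world frame -> camera frame
    u, v, w = K @ p_cam              # camera frame -> homogeneous pixels
    return np.array([u / w, v / w])  # perspective divide

# Hypothetical camera: identity rotation, shifted 5 units along z,
# focal length 500, principal point at (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
print(project_point(np.array([0.0, 0.0, 0.0]), K, R, t))  # the principal point
```

With the pose known for every input image, each voxel can be projected into each image in this way to find its corresponding pixels.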
Voxel coloring involves two steps. First, a three-dimensional model of the scene is built in a step called reconstruction. The model, also called a reconstruction, is composed of points called voxels (short for volume elements). A voxel can be transparent, in which case it represents an empty part of the scene, or it can be opaque and have a color, in which case it represents part of an object in the scene. In the second step, the three-dimensional model is rendered to create the new image.
To reconstruct a scene, the user first specifies the volume of discrete voxels that includes the scene of interest. The algorithm scans the volume one voxel at a time. The voxels are colored as follows. If a voxel projects into approximately the same color in all images, it is marked as opaque and is given the color of its projections. Otherwise, the voxel is left transparent. Specifically, a voxel is colored if the standard deviation of the colors of all the pixels in all the projections is less than a constant, called the coloring threshold. Physically, a voxel that is marked as opaque and is colored represents the surface of an object in a scene, whereas a transparent voxel represents an empty part of the scene.
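The coloring decision above can be sketched as follows. The specific variation measure (the norm of the per-channel standard deviations) and the sample colors are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

def should_color(pixel_colors, threshold):
    """Basic voxel-coloring rule: pixel_colors is an (N, 3) array of RGB
    samples from the voxel's projections in all images; the voxel is
    marked opaque if the color standard deviation is below a constant
    coloring threshold, and transparent otherwise."""
    std = np.linalg.norm(np.std(pixel_colors, axis=0))  # combined per-channel std
    if std < threshold:
        return True, pixel_colors.mean(axis=0)  # opaque, given the mean color
    return False, None                          # left transparent

# Consistent projections are colored; wildly varying ones stay transparent.
consistent = np.array([[200, 30, 30], [202, 28, 31], [199, 31, 29]], float)
varying = np.array([[200, 30, 30], [10, 220, 15], [90, 90, 200]], float)
print(should_color(consistent, 10.0)[0])  # True
print(should_color(varying, 10.0)[0])     # False
```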
The voxel coloring algorithm also deals with occlusions. A voxel is said to be occluded if the view of the voxel from a particular camera is blocked by another voxel that has been colored. The voxel coloring algorithm manages occlusion relationships by maintaining an occlusion bitmap for each image and by scanning away from the cameras. When a voxel is colored, occlusion bits are set for the pixels in the projections of the voxel. Rays from such pixels are blocked by the newly colored voxel and therefore do not reach the voxels that remain to be scanned. Consequently, during the remainder of the reconstruction, the algorithm ignores pixels that have become occluded.
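The occlusion bookkeeping can be sketched as below. The helper names (`projections`, `try_color`) and the near-to-far ordering convention are assumptions introduced for this sketch:

```python
import numpy as np

def reconstruct(images, projections, try_color):
    """Sketch of the occlusion bookkeeping (helper names are hypothetical).
    projections[k][i] lists the (u, v) pixels that voxel k covers in image i;
    voxels are assumed ordered near-to-far from the cameras. try_color
    returns a color if the gathered pixels agree, else None."""
    bitmaps = [np.zeros(img.shape[:2], dtype=bool) for img in images]
    colors = {}
    for k, per_image in enumerate(projections):
        # Gather only pixels whose occlusion bit is still clear.
        visible = []
        for i, pixels in enumerate(per_image):
            for (u, v) in pixels:
                if not bitmaps[i][v, u]:
                    visible.append((i, u, v, images[i][v, u]))
        color = try_color([c for (*_, c) in visible])
        if color is not None:
            colors[k] = color
            # Rays through these pixels are now blocked, so later
            # (farther) voxels along them are ignored.
            for (i, u, v, _) in visible:
                bitmaps[i][v, u] = True
    return colors, bitmaps

# Tiny worked example: two 2x2 grayscale images, two voxels that both
# project to pixel (0, 0); the near voxel is colored and occludes the far one.
images = [np.full((2, 2), 100.0), np.full((2, 2), 100.0)]
projections = [[[(0, 0)], [(0, 0)]],   # voxel 0 (near)
               [[(0, 0)], [(0, 0)]]]   # voxel 1 (far, becomes occluded)
try_color = lambda cs: float(np.mean(cs)) if cs and float(np.std(cs)) < 5.0 else None
colors, bitmaps = reconstruct(images, projections, try_color)
print(colors)  # only voxel 0 is colored
```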
The voxel coloring algorithm described above encounters a problem where a surface has a large, abrupt color variation, such as at the edge of an object in the scene. Such a voxel projects onto pixels with a high color standard deviation. At the edge of an object, this occurs because some pixels in the projection of the voxel fall within the object and other pixels fall outside it. The high color standard deviation is likely to exceed the coloring threshold, so the voxel is not colored for any reasonable coloring threshold. A threshold high enough to allow the edge to be colored ignores most detail elsewhere and results in a very distorted reconstruction. Worse, when the voxel is not colored, the occlusion bitmaps are not set, so no voxels can be colored further along the rays from the cameras through the voxel. Thus, errors propagate.
Accordingly, there is a need for improved methods and apparatus for reconstructing a three-dimensional model of a scene using voxel coloring, wherein one or more of the above drawbacks are overcome.
SUMMARY OF THE INVENTION
According to an aspect of the invention, methods and apparatus are provided for reconstructing a three-dimensional model of a scene from a plurality of images of the scene taken from different viewpoints. The method includes the steps of defining a set of voxels that includes the scene, and processing the voxels in the set of voxels beginning with voxels that are closest to the viewpoints and progressing away from the viewpoints. The processing of each voxel proceeds as follows. The voxel is projected onto a set of pixels in each of the images. A first color variation of not-occluded pixels in the sets of pixels is determined across all images. In addition, a second color variation of not-occluded pixels is determined across the set of pixels for each individual image, and a mean of the second color variations is determined across all images. A coloring threshold that is a function of the mean is established. If the first color variation across all images is less than the coloring threshold, the voxel is colored. Otherwise, the voxel is left transparent. Thus, the coloring threshold is established adaptively and depends on the color variation across the set of pixels for each individual image.
The step of determining a first color variation may comprise determining a color standard deviation across the plurality of images. The step of determining a second color variation may comprise determining a color standard deviation for each individual image. The coloring threshold may be a linear function of the mean of the second color variations.
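A minimal sketch of the adaptive threshold follows. The linear coefficients `a` and `b` are hypothetical; the summary above specifies only that the threshold is a linear function of the mean of the per-image color variations:

```python
import numpy as np

def adaptive_threshold(per_image_pixels, a=2.0, b=1.5):
    """Adaptive coloring threshold: a linear function of the mean of the
    per-image color standard deviations. Coefficients a and b are
    hypothetical placeholders."""
    per_image_std = [np.linalg.norm(np.std(p, axis=0)) for p in per_image_pixels]
    return a + b * float(np.mean(per_image_std))

def color_voxel(per_image_pixels):
    """Color the voxel if the color std across ALL images is below the
    adaptive threshold computed from the per-image stds; else transparent."""
    all_pixels = np.concatenate(per_image_pixels)
    overall_std = np.linalg.norm(np.std(all_pixels, axis=0))
    if overall_std < adaptive_threshold(per_image_pixels):
        return all_pixels.mean(axis=0)  # opaque: set to the mean color
    return None                         # left transparent

# At an object edge, each image sees high internal variation (object plus
# background pixels), which raises the threshold and lets a voxel whose
# projections agree across images be colored.
edge = [np.array([[200.0, 200.0, 200.0], [20.0, 20.0, 20.0]]),
        np.array([[198.0, 198.0, 198.0], [22.0, 22.0, 22.0]])]
print(color_voxel(edge) is not None)  # True
```

Because the per-image variation is already high at an edge, the threshold rises there, so the edge voxel is colored and its occlusion bits are set, stopping the error propagation described in the background.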
The processing of each voxel may further comprise setting bits in an occlusion bitmap corresponding to the sets of pixels in the plurality of images when the voxel is colored.
A color mean of the sets of pixels in the plurality of images may be determined. The step of coloring the voxel may comprise setting the voxel color to the color mean.


REFERENCES:
patent: 5761385 (1998-06-01), Quinn
patent: 6215892 (2001-04-01), Douglass et al.
patent: 6243098 (2001-06-01), Lauer et al.
Stytz et al., “Three-Dimensional Medical Imaging: Algorithms and Computer Systems”, ACM Computing Surveys, vol. 23, no. 4, pp. 421-499, Dec. 1991.
Wittenbrink et al., “Opacity-Weighted Color Interpolation for Volume Sampling”, Proc. IEEE Symposium on Volume Visualization, pp. 135-142, Oct. 1998.
K. Kutulakos et al., “A Theory of Shape by Space Carving”, Univ. of Rochester CS Technical Report #692, pp. 1-27, May 1998.
S. Seitz et al., “Photorealistic Scene Reconstruction by Voxel Coloring”, Proc. Computer Vision and Pattern Recognition Conf., pp. 1067-1073, 1997.
