Hand-held 3D vision system

Television – Stereoscopic – Signal formatting

Details

Class: C382S154000
Type: Reexamination Certificate
Status: active
Patent number: 06781618

FIELD OF THE INVENTION
This invention relates generally to the field of three-dimensional virtual reality environments and models, and, more particularly, to building virtual reality world models from multiple-viewpoint real world images of scenes.
BACKGROUND OF THE INVENTION
In the field of computer graphics, there is a need to build realistic three-dimensional (3D) models and environments that can be used in virtual reality walk-throughs, animation, solid modeling, visualization, and multimedia. Virtual reality environments are increasingly available in a wide variety of applications such as marketing, education, simulation, entertainment, interior and architectural design, fashion design, games, and the Internet, to name but a few.
Many navigable virtual environments with embedded interactive models tend to be very simplistic because of the large amount of effort required to generate realistic 3D virtual models that behave in a realistic manner. Generating a quality virtual reality scene requires sophisticated computer systems and a considerable amount of hand-tooling. The manual 3D reconstruction of real objects using CAD tools is usually time-consuming and costly.
The Massachusetts Institute of Technology, the University of Regina in Canada, and Apple Computer, Inc. jointly created the “Virtual Museum Project,” a computer-based rendering of a museum that contains various objects of interest. As the user moves through the virtual museum, individual objects can be approached and viewed from a variety of perspectives.
Apple Computer also developed the QuickTime VR™ system, which allows a user to navigate within a virtual reality scene generated from digitized overlapping photographs or video images. However, warping can distort the images so that straight lines appear curved, and it is not possible to place 3D models in the scene.
Three-dimensional digitizers are frequently used to generate models from real world objects. Considerations of resolution, repeatability, accuracy, reliability, speed, and ease of use, as well as overall system cost, are central to the construction of any digitizing system. Often, the design of a digitizing system involves a series of trade-offs between quality and performance.
Traditional 3D digitizers have focused on geometric quality measures for evaluating system performance. While such measures are objective, they are only indirectly related to the overall goal of a high-quality rendition. In most 3D digitizer systems, the rendering quality is largely an indirect result of range accuracy in combination with a small number of photographs used for textures.
Prior art digitizers include contact digitizers, active structured-light range-imaging systems, and passive stereo depth-extraction. For a survey, see Besl, “Active Optical Range Imaging Sensors,” Advances in Machine Vision, Springer-Verlag, pp. 1-63, 1989.
Laser triangulation and time-of-flight point digitizers are other popular active digitizing approaches. Laser ranging systems often require a separate position-registration step to align separately acquired scanned range images. Because active digitizers emit light onto the object being digitized, it is difficult to capture both texture and shape information simultaneously. This introduces the problem of registering the range images with textures.
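For illustration only (this is not the patent's method, and all numeric values are assumptions), the two active ranging principles reduce to simple geometry: triangulation intersects the emitted laser ray with the camera's line of sight across a known baseline, while time-of-flight halves the round-trip travel time of a light pulse.

```python
import math

def triangulation_depth(baseline_m, laser_angle_rad, camera_angle_rad):
    """Depth of a laser spot observed by a camera separated from the
    emitter by a known baseline; both angles are measured between the
    baseline and the respective ray."""
    return baseline_m / (1.0 / math.tan(laser_angle_rad)
                         + 1.0 / math.tan(camera_angle_rad))

def time_of_flight_depth(round_trip_s, c=299_792_458.0):
    """Depth from the round-trip travel time of an emitted pulse."""
    return c * round_trip_s / 2.0

print(triangulation_depth(0.10, math.radians(60), math.radians(75)))  # ~0.118 m
print(time_of_flight_depth(6.7e-9))                                   # ~1.0 m
```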
In other systems, multiple narrow-band illuminants, e.g., red, green, and blue lasers, are used to acquire a surface color estimate along lines-of-sight. However, this is not useful for capturing objects in realistic illumination environments.
Passive digitizers can be based on single cameras or stereo cameras. Passive digitizers have the advantage that the same source images can be used to acquire both structure and texture, provided the object has sufficient texture.
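As a hedged sketch of passive stereo depth extraction (the image file names, focal length, and baseline below are placeholder assumptions), a block matcher produces a disparity map d from a rectified image pair, and depth follows as Z = fB/d:

```python
import cv2
import numpy as np

# Assumed inputs: a rectified stereo pair; "left.png"/"right.png" are
# hypothetical file names, not files from the patent.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # BM output is fixed-point

f_px, baseline_m = 700.0, 0.12        # assumed calibration values
valid = disparity > 0                 # texture-poor regions yield no match
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]  # Z = f * B / d
```

The texture caveat shows up directly here: where the surface is bland, the matcher finds no disparity and no depth is recovered.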
Image-based rendering systems can also be used; see Nishino, K., Y. Sato, and K. Ikeuchi, “Eigen-Texture Method: Appearance Compression based on 3D Model,” Proc. of Computer Vision and Pattern Recognition, 1:618-624, 1999, and Pulli, K., M. Cohen, T. Duchamp, H. Hoppe, L. Shapiro, and W. Stuetzle, “View-based Rendering: Visualizing Real Objects from Scanned Range and Color Data,” Proceedings of the 8th Eurographics Workshop on Rendering, pp. 23-34, 1997. In these systems, images and geometry are acquired separately with no explicit consistency guarantees.
In image-based vision systems, there are two inherent and somewhat related problems. The first problem has to do with deducing the camera's intrinsic parameters. Explicit calibration of intrinsic parameters can be circumvented in specialized processes but is common in many existing systems. The second problem is concerned with estimating the camera's extrinsic parameters, i.e., the camera position/motion relative to the environment or relative to the object of interest. Estimating the camera positions is an essential preliminary step before the images can be assembled into a virtual environment.
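The distinction can be made concrete with the standard pinhole model, in which a world point X projects to a pixel via x ~ K[R|t]X, where K holds the intrinsic parameters and [R|t] the extrinsic ones. A minimal sketch with illustrative, assumed values:

```python
import numpy as np

# Intrinsics K: focal lengths and principal point, in pixels (assumed).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics [R|t]: camera rotation and translation (assumed values).
R = np.eye(3)
t = np.array([[0.1], [0.0], [2.0]])

X = np.array([[0.0], [0.0], [0.0], [1.0]])  # homogeneous world point
x = K @ np.hstack([R, t]) @ X               # project: x ~ K [R|t] X
u, v = (x[:2] / x[2]).ravel()               # perspective divide -> pixels
print(u, v)                                 # 360.0 240.0
```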
The terms ‘camera position’ and ‘camera motion’ are used interchangeably herein, with the term ‘camera position’ emphasizing the location and the orientation of a camera, and the term ‘camera motion’ indicating a sequence of camera positions as obtained, for example, from a sequence of images.
The first problem, of calibrating a camera's intrinsic parameters, is well studied. Solutions for calibrating a single camera are too many to enumerate. Solutions for calibrating stereo cameras are also well known. There, the simple requirement is to have some overlap in the images acquired by the stereo cameras. Calibrating rigid multi-camera systems where there is no overlap of the viewed scene in the different cameras has, however, not been the subject of previous work.
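For the single-camera case, a common solution estimates the intrinsics from several views of a planar checkerboard. The sketch below uses OpenCV's calibration routines; the 9x6 pattern size and the "calib_*.png" file names are assumptions for illustration, not details from the patent:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # assumed inner-corner grid
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):              # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Recover the intrinsic matrix and lens distortion from all views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
```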
In the prior art, the second problem, of estimating camera position, can be solved in a number of ways. For generating a 3D model of a portable object, one method rigidly fixes the cameras at known locations, and rotates the object on a turntable through precise angular intervals while taking a sequence of images. Great care must be taken in setting up and maintaining the alignment of the cameras, object, and turntable. Therefore, this type of modeling is usually done in a studio setting, and is of no use for hand-held systems.
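In such a setup the camera extrinsics follow directly from the turntable geometry: rotating the object by a known increment about a fixed vertical axis is equivalent to orbiting the camera by the opposite angle. A small sketch under that assumption (the 10-degree step is illustrative):

```python
import numpy as np

def turntable_rotations(num_views=36, step_deg=10.0):
    """Equivalent camera rotations for an object spun about the vertical
    (y) axis in precise increments; assumes perfect alignment, which is
    exactly what makes the setup fragile in practice."""
    poses = []
    for k in range(num_views):
        a = np.radians(-k * step_deg)   # camera orbits opposite the table
        poses.append(np.array([[ np.cos(a), 0.0, np.sin(a)],
                               [ 0.0,       1.0, 0.0      ],
                               [-np.sin(a), 0.0, np.cos(a)]]))
    return poses
```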
Another method for generating a 3D model of an object of interest assumes a known “position-registration pattern” somewhere in the field of view. The term “position-registration pattern” is used here to indicate a calibration pattern that enables computation of the camera position relative to the pattern, in a fixed coordinate frame defined by the pattern. For example, a checkerboard pattern is placed behind the object while images are acquired. However, this method for computing camera position also has limitations. First, it is difficult to view the object from all directions, unless the position-registration pattern is relocated and the system is re-calibrated. Second, the presence of the pattern makes it more difficult to identify the boundary of the object, as a precursor to further processing for building a 3D model, than would be the case with a bland, low-texture background.
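A sketch of this approach, assuming an already-calibrated camera: detecting the pattern's corners and solving the resulting perspective-n-point problem yields the camera pose in the coordinate frame defined by the pattern. The intrinsics, square size, and file name below are placeholders:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],          # assumed intrinsics from a
              [0.0, 800.0, 240.0],          # prior calibration step
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                          # assume negligible distortion

pattern = (9, 6)
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm squares

gray = cv2.imread("view.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image
found, corners = cv2.findChessboardCorners(gray, pattern)
if found:
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    cam_center = -R.T @ tvec     # camera position in the pattern's frame
```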
Obviously, the two techniques above are not practical for imaging large-scale, walk-through environments. In that case, the varying position of architectural details in the image, as the camera is moved, can be used to determine camera motion. However, these scenes often include a large amount of extraneous movement or clutter, such as people, making it difficult to track image features between successive images, and hence making it difficult to extract camera position.
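One common way to estimate camera motion from such tracked features (a hedged sketch, not the method of this patent) is to fit an essential matrix with RANSAC, which also rejects tracks on moving clutter such as people, and then decompose it into a rotation and a translation direction. The intrinsics and the synthetic point tracks below are stand-in assumptions:

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])             # assumed intrinsics

# Stand-ins for feature tracks between two frames; a real system would
# obtain these from a feature tracker.
rng = np.random.default_rng(0)
pts1 = (rng.random((200, 2)) * [640, 480]).astype(np.float32)
pts2 = pts1 + np.float32([4.0, 1.0]) + rng.normal(0, 0.3, pts1.shape).astype(np.float32)

E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                  method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
# R and t (up to scale) give the relative camera motion between frames.
```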
Motion parameters are easier to resolve when the camera has a wide field of view, because more features in a scene are likely to be visible for use in the motion computation, and the motion computations are inherently more stable when features with a wide angular spacing relative to the observer are used. Computation of camera position/motion is also easier when working from images of rigid structure, or known geo
