Method and apparatus for multi-view three dimensional...

Image analysis – Image transformation or preprocessing – Changing the image coordinates

Reexamination Certificate


Details

Patent number: 06571024
Type: Reexamination Certificate
Status: active
U.S. classifications: C382S284000, C382S154000, C348S042000

ABSTRACT:

The invention relates to an image processing apparatus and, more particularly, to a method and apparatus for performing three dimensional scene estimation of camera pose and scene geometry, and for authentically inserting a synthetic object into a real scene using the information provided by the scene estimation routine.
BACKGROUND OF THE DISCLOSURE
Seamless three dimensional insertion of synthetic objects into real scene images requires tools that allow a user to situate synthetic objects with respect to real surfaces within a scene. To create a realistic image, the synthetic objects must be projected from all the given camera viewpoints of the real scene. The current methodology for inserting a synthetic object into a real scene includes the cumbersome task of tracking and recording the camera pose and calibration for each frame of a sequence, so that the geometry and orientation of the synthetic object can be matched to the pose and calibration data of each individual frame. This matching of the synthetic object's geometry and orientation to per-frame pose and calibration data is repeated from frame to frame in order to maintain a realistic view of the inserted object through a sequence of frames. In current practice, pose estimation is accomplished by modeling the three dimensional background scene prior to the pose computation, which is a tedious process.
To automate the insertion process, object insertion must be performed in as few frames as possible, preferably one, with all other views of the object created automatically. Placing the object with respect to the real scene requires accurate, albeit limited, three dimensional geometry; for instance, estimation of local surface patches may suffice. A stable three dimensional appearance change of the object from the given camera positions requires a reliable three dimensional camera pose computation. Furthermore, since graphics objects are typically created using Euclidean geometry, it is strongly desirable that the real scene and its associated camera pose be represented in Euclidean coordinates. Stability of the pose computation over extended image sequences is required to avoid jitter and drift in the location and appearance of synthetic objects with respect to the real scene.
Therefore, a need exists in the art for a method and apparatus for estimating three dimensional pose (rotation and translation) and three dimensional structure of unmodeled scenes to facilitate the authentic insertion of synthetic objects into a real scene view.
SUMMARY OF THE INVENTION
The present invention provides an apparatus and method for estimating pose and scene structure in extended scene sequences while allowing for the insertion and authentic projection of three dimensional synthetic objects into real views. Generally, given a video sequence of N frames, the invention computes the camera pose (the rotation and translation) without knowledge of a three dimensional model representing the scene. The inventive apparatus executes a multi-view three dimensional pose and structure estimation routine comprising the steps of feature tracking, pairwise camera pose estimation, computing camera pose for overlapping sequences, and performing a global block adjustment that provides camera pose and scene geometric information for each frame of a video sequence. The pairwise camera pose estimation may alternatively be performed between “key” frames rather than between every frame of the sequence. The “key” frames are selected from within a sequence of frames as: frames with sufficient parallax motion between them; frames that transition between overlapping sets of correspondences; or frames sampled at regular intervals if motion within the frame sequence is smooth. A “Match Move” routine may be used to insert a synthetic object into one frame of a video sequence based on the pose and geometric information of that frame, and to calculate all other required views of the synthetic object for the remaining frames using the pose and geometric information acquired through the multi-view three dimensional estimation routine. As such, the synthetic object is inserted into the scene and appears as a “real” object within the imaged scene.
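The key-frame selection rule described above can be illustrated with a short sketch. The function below, the array shapes, and the 15-pixel parallax threshold are illustrative assumptions for exposition only; the patent does not specify an implementation.

```python
import numpy as np

def select_key_frames(tracks, min_parallax=15.0):
    """Illustrative key-frame selection based on parallax motion.

    tracks: array of shape (n_frames, n_features, 2) holding the pixel
    positions of the same tracked features in every frame (the output of
    a feature-tracking step). A frame becomes a key frame when the median
    feature displacement since the previous key frame reaches
    `min_parallax` pixels, a simple proxy for "sufficient parallax
    motion between the frames".
    """
    key_frames = [0]  # the first frame always anchors the sequence
    for f in range(1, tracks.shape[0]):
        # Displacement of every feature relative to the last key frame.
        disp = np.linalg.norm(tracks[f] - tracks[key_frames[-1]], axis=1)
        if np.median(disp) >= min_parallax:
            key_frames.append(f)
    return key_frames
```

For smooth camera motion this rule degenerates into roughly regular sampling, consistent with the third selection criterion above; pairwise pose estimation would then be run only between consecutive key frames.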


REFERENCES:
patent: 5870136 (1999-02-01), Fuchs et al.
patent: 5963203 (1999-10-01), Goldberg et al.
patent: 5987164 (1999-11-01), Szeliski et al.
patent: 6151009 (2000-11-01), Kanade et al.
patent: 6297825 (2001-10-01), Madden et al.
patent: 6307550 (2001-10-01), Chen et al.
C. Tomasi and T. Kanade, “Shape and Motion from Image Streams under Orthography: A Factorization Method”, International Journal of Computer Vision (1992), 9(2), pp. 137-154.
Torr, P. and Zisserman, A., “Robust Parameterization and Computation of the Trifocal Tensor”, Image and Vision Computing, vol. 24 (1997), pp. 271-300.
Z. Zhang et al., “A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry”, Artificial Intelligence (1995), vol. 78, pp. 87-119.
K. Hanna, N. Okamoto, “Combining Stereo and Motion Analysis for Direct Estimation of Scene Structure”, Proc. Fourth Int. Conf. on Computer Vision (ICCV'93), May 1993.
P. Perona, Jitendra Malik, “Scale-Space and Edge Detection Using Anisotropic Diffusion”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, No. 7, Jul. 1990, pp. 629-639.
J. Bergen, P. Anandan, K. Hanna, R. Hingorani, “Hierarchical Model-Based Motion Estimation”, Proc. of European Conference on Computer Vision-92, Mar. 23, 1992.
G. Adiv, “Inherent Ambiguities in Recovering 3D Information from a Noisy Flow Field”, IEEE PAMI, 11(5) (1989), pp. 477-489.
O. Faugeras and R. Keriven, “Complete Dense Stereovision Using Level Set Methods”, ECCV (1998), pp. 379-393.
A. Fitzgibbon and A. Zisserman, “Automatic Camera Recovery for Closed or Open Image Sequences”, in ECCV (1998), Freiburg, Germany.
R.I. Hartley, “Estimation of Relative Camera Positions for Uncalibrated Cameras”, In Proc. 2nd European Conference on Computer Vision (1992), pp. 579-587.
R.I. Hartley, et al., “Triangulation”, Proc. DARPA Image Understanding Workshop (1994), pp. 957-966.
R.I. Hartley, “Euclidean Reconstruction from Uncalibrated Views”, In Joint European-US Workshop on Applications of Invariance in Computer Vision (1993).
R. Koch et al., “Multiviewpoint Stereo from Uncalibrated Video Sequences”, in ECCV (1998), Freiburg, Germany.
B.D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision”, in Image Understanding Workshop (1981), pp. 121-130.
Sawhney, et al., “Robust Video Mosaicing Through Topology Inference And Local to Global Alignment”, ECCV (1998), pp. 103-119.
S. Seitz and C. Dyer, “Photorealistic Scene Reconstruction by Voxel Coloring”, in Proc. Computer Vision and Pattern Recognition Conference (1997), pp. 1067-1073.
R. Szeliski and H. Shum, “Creating Full View Panoramic Image Mosaics and Environment Maps”, in Proc. of SIGGRAPH (1997), pp. 251-258.
