Method of reconstruction of tridimensional scenes and...

Image analysis – Applications – 3-d or stereo imaging analysis

Reexamination Certificate

Details

Classification codes: C382S171000, C382S285000, C382S286000, C348S578000, C348S580000, C345S419000, C345S422000, C345S424000

Status: active

Patent number: 06661914

ABSTRACT:

The present invention relates to a method of reconstruction of a tridimensional scene from a bidimensional video sequence corresponding to N successive images of a real scene, and to a corresponding reconstruction device and a decoding system.
In light of recent advances in technology, and in the framework of everything related to the future MPEG4 standard (intended to provide means for encoding graphic and video material as objects having given relations in space and time), everything relating to stereo images and virtual environments is becoming an important tool, for instance in engineering, design or manufacturing. Stereo images, usually generated by recording two slightly different view angles of the same scene, are perceived in three dimensions (3D) if said images are considered in pairs and if each image of a stereo pair is viewed by its respective eye. Moreover, in such stereo and virtual reality contexts, a free walkthrough into the created environments is required and possible. This creation of virtual environments is performed by means of picture synthesis tools, typically according to the following steps:
(a) a recovery step of a 3D geometric model of the concerned scene (for instance, by using a facet representation);
(b) a rendering step, provided for computing views according to specific points of view and taking into account all the known elements (for instance, lights, reflectance properties of the facets, correspondence between elements of the real views, ...).
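As a purely illustrative aid, the two steps above can be reduced to a minimal sketch: a triangular-facet model of the scene (step (a)) and a toy shading computation for one light direction (step (b)). The Facet type, the Lambertian shading and all parameter names are assumptions introduced for illustration; they are not the patent's data structures.

```python
# Minimal sketch, not the patent's representation: a facet model and a toy render step.
from dataclasses import dataclass
import numpy as np

@dataclass
class Facet:
    vertices: np.ndarray      # 3x3 array, one triangular facet of the geometric model (step (a))
    reflectance: float        # known surface property taken into account at rendering time

def facet_normal(f: Facet) -> np.ndarray:
    """Unit normal of the triangular facet."""
    n = np.cross(f.vertices[1] - f.vertices[0], f.vertices[2] - f.vertices[0])
    return n / np.linalg.norm(n)

def lambert_shade(f: Facet, light_dir: np.ndarray) -> float:
    """Step (b), reduced to its simplest form: shade one facet for one light direction."""
    return f.reflectance * max(0.0, float(np.dot(facet_normal(f), light_dir)))
```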
The reconstruction of a 3D geometric model of a scene however requires performing an image matching among all available views. In the document "Multiframe image point matching and 3D surface reconstruction", R. Y. Tsai, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-5, no. 2, Mar. 1983, pp. 159-174, such a correspondence problem is solved by computing a correlation function that takes into account (inside a defined search window, along an axis corresponding to the sampling grid of the input pictures) the information of all the other views in one single pass, providing in this way a rather robust method against noise and periodical structures. The minimum of this function provides an estimate of the depth of the pixel in the center of the search window. Unfortunately, this depth estimate has a non-linear dependence (in 1/x in the simplest case) on the sampling grid. Moreover, the depth map estimation for a surface obtained from one picture cannot be easily compared with the depth map estimation of the same surface obtained from another picture, because they do not share the same reference grid (they are only referenced to their respective picture sampling grids).
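The 1/x dependence criticized here is the usual depth-disparity relationship of rectified stereo: equal steps along the picture sampling grid map to unequal steps in depth. The sketch below, with assumed parameter names (focal_px, baseline_m) that do not come from the patent, only illustrates that relationship.

```python
# Hedged illustration of the 1/x depth-disparity relationship (simplest stereo geometry).
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a pixel for rectified, parallel cameras: depth ~ 1/disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Regularly spaced disparities yield irregularly spaced depths, so depth maps referenced
# to different picture sampling grids are hard to compare with one another.
for d in (4, 8, 12, 16, 20):   # disparities in pixels
    print(d, depth_from_disparity(d, focal_px=700.0, baseline_m=0.1))
```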
A first object of the invention is to propose a scene reconstruction method which no longer shows these drawbacks.
To this end the invention relates to a method of reconstruction such as defined in the preamble of the description and which is moreover characterized in that it comprises in series, for each image of the sequence (each image being segmented into triangular regions):
(A) a first depth labeling step, in which, each view being considered as the projection of a continuous 3D sheet, a multi-view matching is performed independently on each view in order to get a disparity map corresponding to the depth map of said 3D sheet;
(B) a second 3D model extraction step, in which an octree subdivision of the 3D space is performed and the voxels (volume elements) lying in the intersection of all 3D depth sheets are kept. An octree is a tree-structured representation used to describe a set of binary valued volumetric data enclosed by a bounding cube and constructed by recursively subdividing each cube into eight subcubes, starting at the root node which is a single large cube: octrees are an efficient representation for many volumetric objects since there is a large degree of coherence between adjacent voxels in a typical object.
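The octree subdivision of step (B) can be sketched as a short recursion: a bounding cube is split into eight sub-cubes, and only the volume lying in the intersection of all 3D depth sheets is retained. The `occupied_in_all_views` predicate and the `Cube` type below are hypothetical placeholders, not the patent's exact procedure.

```python
# Minimal octree-subdivision sketch, assuming a user-supplied occupancy predicate.
from dataclasses import dataclass

@dataclass
class Cube:
    x: float
    y: float
    z: float
    size: float   # edge length; (x, y, z) is the minimum corner of the cube

def subdivide(cube: Cube):
    """Split a cube into its eight sub-cubes."""
    s = cube.size / 2.0
    return [Cube(cube.x + i * s, cube.y + j * s, cube.z + k * s, s)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]

def build_octree(cube: Cube, occupied_in_all_views, min_size: float):
    """Recursively keep only the volume lying in the intersection of all 3D depth sheets."""
    if not occupied_in_all_views(cube):
        return []                          # empty in at least one view: discard the cube
    if cube.size <= min_size:
        return [cube]                      # leaf voxel kept in the model
    kept = []
    for child in subdivide(cube):
        kept.extend(build_octree(child, occupied_in_all_views, min_size))
    return kept
```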
With such an approach, a correlation function along an axis corresponding to sampled values of depth in the 3D world coordinate system (constituting a depth sampling grid provided at will by the user) is computed taking all views into account, and the minimum of this function is directly related to an accurate value of depth in said coordinate system (this is a great advantage when multiple depth estimations are obtained from different viewpoints). The depth sampling grid is provided by the user at will and is advantageously chosen regularly spaced, taking however into account some preliminary knowledge about the surface to be reconstructed (for instance if said surface is known to lie within a predefined bounding box, which is the case for indoor scenes).
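A hedged sketch of this idea follows: the matching cost is accumulated over all view pairs for each sampled world-coordinate depth along a viewing ray, and the minimum directly yields a depth in world units. The names `ray_origin`, `ray_dir` and `correlation_cost` are placeholders for the patent's window-based matching, not its exact definition.

```python
# Sketch: depth estimation over a user-provided world-coordinate depth grid.
import numpy as np

def estimate_depth(ray_origin, ray_dir, depth_grid, view_pairs, correlation_cost):
    """Return the depth (in world units) minimizing the summed matching cost over all view pairs."""
    costs = [sum(correlation_cost(pair, ray_origin + z * ray_dir) for pair in view_pairs)
             for z in depth_grid]                 # depth_grid: regularly spaced depths chosen by the user
    return depth_grid[int(np.argmin(costs))]      # the minimum is directly a depth value, not a disparity
```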
The document U.S. Pat. No. 5,598,515 describes a system and method for reconstructing a tridimensional scene or elements of such a scene from a plurality of bidimensional images of said scene, but according to a complex procedure that is replaced, in the case of the invention, by a much simpler one, subjected to successive refinements until convergence is obtained.
According to the invention, said depth labeling step preferably comprises in series an initialisation sub-step, provided for defining during a first iteration a preliminary 3D depth sheet for the concerned image, and a refinement sub-step, provided for defining, for each vertex of each region, an error vector corresponding for each sampled depth to the summation of correlation costs between each of the (N−1) pairs of views (for a sequence of N images) on a window specifically defined for said vertex, and for storing the index that provides the minimum correlation cost. An additional operation is intended to replace, after the first iteration, the initialisation sub-step by a projection sub-step provided first for adjusting the position and field of view of the image acquisition device according to its parameters and the vertex map near to the image plane, and then for listing for each vertex the voxels that intersect the line passing through the vertex and the optical center of said acquisition device, in the viewing direction, and selecting the nearest voxel to the image plane.
Concerning said 3D model extraction step, it preferably comprises in series a resolution definition sub-step, provided for defining the resolution of the voxel grid, and a voxel selection sub-step, provided for keeping for each view the voxels lying inside the non-empty spaces provided by each depth map and then only keeping the voxels lying at the intersection of all non-empty spaces.
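The voxel selection sub-step amounts to intersecting the non-empty spaces contributed by each view. A minimal sketch, assuming each view has already produced a boolean occupancy grid of identical shape (True where the voxel lies inside that view's non-empty space); how those grids are built from the depth maps is not shown.

```python
# Sketch of voxel selection: keep only voxels lying at the intersection of all non-empty spaces.
import numpy as np

def select_voxels(non_empty_per_view):
    """Intersect the boolean occupancy grids contributed by all views."""
    kept = non_empty_per_view[0].copy()
    for grid in non_empty_per_view[1:]:
        kept &= grid                     # a voxel survives only if every view keeps it
    return kept                          # boolean voxel grid of the reconstructed model

# Toy usage: three views voting on a 2x2x2 voxel grid.
views = [np.ones((2, 2, 2), dtype=bool) for _ in range(3)]
views[1][0, 0, 0] = False                # one view empties this voxel
print(select_voxels(views).sum())        # 7 voxels remain in the intersection
```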
Another object of the invention is to propose a reconstruction device making it possible to carry out this method.
To this end the invention relates to a device for reconstructing a tridimensional scene from a bidimensional video sequence corresponding to N successive images of a real scene, characterized in that:
(I) each of the N images of the sequence is segmented into triangular regions;
(II) said device comprises, for processing each image of said sequence:
(A) a depth labeling sub-system, comprising itself in series:
(1) an initialisation device, provided for defining during a first iteration an error vector corresponding for a set of sampled depths to the summation of correlation costs between each of the (N−1) pairs of views and the index providing the minimum correlation cost, the depth value of each vertex of the regions being computed by interpolation between the depths obtained for the neighboring regions;
(2) a refinement device, provided for defining similarly for each vertex an error vector on a previously delimited window and, correspondingly, the index providing the minimum correlation cost;
(B) a reconstruction sub-system provided for selecting the resolution of the voxel grid and keeping, for each view, the voxels lying inside the non-empty spaces provided by each depth map and, finally, only the voxels lying at the intersection of all non-empty spaces;
(III) said depth labeling sub-system also comprises a projection device intended to replace during the following iterations the initialisation device and provided for adjusting the position and field of view of the image acquisition device according to its parameters and the vertex map near to the image plane, then for listing for each vertex the voxels that intersect the line passing through the vertex and the optical center of said acquisition device, in the viewing direction, and for selecting the nearest voxel to the image plane.
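The overall processing chain described in (I)-(III) can be summarized as a hypothetical loop alternating depth labeling and 3D model extraction. The callables `init_step`, `refine_step`, `project_step` and `reconstruct` stand in for the initialisation, refinement, projection and reconstruction devices above, and the fixed iteration count replaces the convergence test as a simplification.

```python
# Simplified, hypothetical sketch of the reconstruction loop; not the patented device itself.
def reconstruct_scene(images, n_iterations, init_step, refine_step, project_step, reconstruct):
    """Alternate depth labeling and 3D model extraction over successive refinements."""
    depth_sheets = [init_step(image) for image in images]        # initialisation (first iteration only)
    voxels = reconstruct(depth_sheets)                           # octree / voxel-intersection model
    for _ in range(n_iterations - 1):
        depth_sheets = [refine_step(project_step(image, voxels)) # projection, then per-vertex refinement
                        for image in images]
        voxels = reconstruct(depth_sheets)
    return voxels
```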
