Method and system for 3-D content creation

Image analysis – Image transformation or preprocessing – Mapping 2-d image onto a 3-d surface

Details
Type: Reexamination Certificate
Status: active
Patent number: 06816629
US Classification: C382S154000, C345S419000

FIELD OF THE INVENTION
The present invention pertains to the areas of image processing and machine processing of images. In particular, the present invention relates to a method for performing photo-realistic 3-D content creation from 2-D sources such as photographs or video.
BACKGROUND INFORMATION
It is often desirable to generate a three-dimensional (“3-D”) model of a 3-D object or scene. A 3-D representation of an object may be utilized in a computer graphics context for presenting 3-D content to users, increasing the effectiveness and realism of images. One method for generating 3-D information utilizes the techniques of projective geometry and perspective projection: a 2-D projective space is a perspectively projected image of a 3-D space.
Typically, image data is obtained utilizing a digital imaging system, which captures a plurality of 2-D digitally sampled representations of a scene or object (scene data sources). In the alternative, an analog source (such as an image obtained from a traditional camera) may be utilized and digitized/sampled using a device such as a scanner. The 2-D digital information representing the scene or object may then be processed using the techniques of computational projective geometry and texture mapping to generate the structure of a 3-D image.
FIG. 1a is a flowchart that depicts a general paradigm for generating a 3-D model of a scene or object from a plurality of 2-D scene data sources. The process is initiated in step 151. In step 154, a plurality of 2-D scene data sources are generated by capturing respective 2-D images of the scene from a variety of perspectives (i.e., camera positions). In step 155, using the techniques of computational projective geometry, the shape of the image is deduced by determining a 3-D feature set associated with the scene or object. For example, if the prominent features determined in step 153 are points, a point cloud {X} may be generated; a point cloud is 3-D information {X} for a set of pre-determined points on a desired image or object. In step 157, a texture mapping process is applied to the point cloud solution to generate a 3-D model of the scene or object. The process ends in step 159.
Generating a 3-D point cloud from a plurality of 2-D sources depends upon solving two interrelated sub-problems: the camera calibration problem and the point-matching problem. The camera calibration problem requires calculating the relative camera rotations R and translations T associated with the plurality of 2-D scene data sources. The point-matching problem requires determining a correspondence of image points in at least two scene data sources.
One known technique for determining the 3-D shape of an object (the point cloud) relies upon the use of input from two or more photographic images of the object, taken from different points of view. This problem is known as the shape from motion (“SFM”) problem, the motion being either the camera motion or, equivalently, the motion of the object. In the case of two images, this problem is also known as the stereo-vision problem, and the process of extracting a 3-D point cloud from stereo pairs is known as photogrammetry. Once the shape is determined, it is then possible to map the textures of the object from the photographs to the 3-D shape and hence create a photo-realistic virtual model of the object that can be displayed in standard 3-D viewers such as a VRML (“Virtual Reality Modeling Language”) browser.
In order to solve the SFM problem, the relative positions and orientations of the cameras must be known or calculated; this is known as solving the camera calibration problem. The camera calibration problem can be solved if at least 5-8 points can be matched on each of the images, corresponding to the same physical 3-D points on the actual object or scene (in practice, the number of points required for robust estimation is typically far greater than 8). Line segments or complete object sub-shapes may be matched in place of points.
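For illustration only (this is not the patent's own method), the two-view calibration step can be sketched with OpenCV's essential-matrix estimation; the intrinsics K and the synthetic scene below are assumed placeholder values standing in for real matched points:

```python
# Minimal sketch of two-view camera calibration from matched points, using
# OpenCV rather than the patent's method: estimate the relative rotation R
# and translation T from >= 5 correspondences and known intrinsics K.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],   # placeholder intrinsics: focal length
              [0.0, 800.0, 240.0],   # 800 px, principal point (320, 240)
              [0.0, 0.0, 1.0]])

# Synthetic stand-in for real matches: random 3-D points projected into two
# views whose true relative motion is a small rotation plus a translation.
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(40, 3))   # points in front of camera 1

def project(X, R, t):
    x = (K @ (R @ X.T + t)).T
    return x[:, :2] / x[:, 2:]

angle = 0.1                                             # true relative rotation (rad)
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
t_true = np.array([[1.0], [0.0], [0.0]])

pts1 = project(X, np.eye(3), np.zeros((3, 1)))
pts2 = project(X, R_true, t_true)

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, T, _ = cv2.recoverPose(E, pts1, pts2, K)          # R, T recovered up to scale
```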
Once the cameras have been calibrated (or simultaneously with calibration), the object shape can be calculated. Known point correspondences, together with the camera calibration, provide a sufficient set of constraints to allow calculation of the 3-D positions of all corresponding points utilizing the techniques of projective geometry. If enough points are matched, the object shape emerges as a point cloud. These points can be connected to define surfaces and hence determine the complete 3-D surface shape of the object.
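Continuing the sketch above (and reusing its K, R, T, pts1, pts2), the triangulation step that turns calibrated cameras plus correspondences into a point cloud might look like this, again using OpenCV purely as an illustration:

```python
# Sketch: build the two projection matrices and triangulate every
# correspondence into a 3-D point, yielding the point cloud {X}.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
P2 = K @ np.hstack([R, T])                          # second camera from recoverPose
Xh = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4 x N homogeneous coordinates
cloud = (Xh[:3] / Xh[3]).T                          # N x 3 Euclidean point cloud
```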
Automatic point-matching algorithms have been developed to match image points from one 2-D image source to another (see [Lew et al] for a brief survey of the better-known feature matching algorithms). These automatic point-matching algorithms, however, have difficulty when the camera points of view differ significantly: the patterns required for point matching are then heavily distorted by perspective foreshortening, lighting variations, and other causes. For this reason, such approaches tend to work best when the camera points of view are very close to each other, which limits their applicability.
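By way of example, one common automatic point-matching recipe pairs a local feature detector with brute-force descriptor matching; the sketch below uses OpenCV's ORB (the image file names are placeholders, and this is not one of the specific algorithms surveyed in [Lew et al]):

```python
# Illustrative automatic point matching with ORB features and Hamming-distance
# brute-force matching. Wide-baseline views distort the local patterns these
# descriptors rely on, which is the limitation discussed above.
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
points1 = [kp1[m.queryIdx].pt for m in matches]        # matched 2-D points,
points2 = [kp2[m.trainIdx].pt for m in matches]        # ready for calibration
```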
Another approach that has been exploited is to assume that a set of prominent points on the object can be determined in each image, but that the point correspondence between images is not known. In this case, the 3-D constraints are used to solve not only the calibration and shape problems but also the point-matching problem. If this can be done, then there are many practical cases where automation can be used to find the object shape. This approach is referred to herein as the method of uncalibrated point matching through perspective constraints (“UPMPC”).
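To make the combinatorial character of UPMPC concrete, here is a toy sketch of its brute-force core: with correspondences unknown, a naive solver must score every permutation of candidate matchings. The score() callable is a hypothetical measure of how well a matching satisfies the perspective constraints, not something defined in the patent:

```python
# Toy brute-force UPMPC core: enumerate every way of matching the N points of
# one image to the N points of another and keep the matching that best
# satisfies the 3-D constraints. score() is a hypothetical consistency
# measure supplied by the caller.
from itertools import permutations

def brute_force_upmpc(pts1, pts2, score):
    best_value, best_perm = float("-inf"), None
    for perm in permutations(range(len(pts2))):       # N! candidate matchings
        candidate = [pts2[i] for i in perm]
        value = score(pts1, candidate)
        if value > best_value:
            best_value, best_perm = value, perm
    return best_perm                                  # pts1[k] <-> pts2[best_perm[k]]
```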
Relatively little work has been done to solve the UPMPC problem, partly because it has appeared to be more difficult than solving the point-matching problem directly, and partly because it appears to require extremely time-consuming calculations, proportional to the number of points factorial, squared: (N!)(N!). For example, Dellaert et al propose a UPMPC solution relying upon a statistical sampling method, and Jain proposes a UPMPC solution utilizing a set of constraints, assumptions, and a minimization methodology.
Known UPMPC solutions, however, are generally deficient in two respects. First, they are typically very slow, owing to the (N!)(N!) computational complexity, which limits their application. Second, they are typically not tolerant of noise and are therefore not robust enough for industrial use.
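The scale of the (N!)(N!) figure is easy to check directly; even a dozen points already yields on the order of 10^17 candidate pairings:

```python
# Worked numbers for the (N!)(N!) complexity cited above.
from math import factorial

for n in (5, 8, 10, 12):
    print(n, factorial(n) ** 2)
# 5  14400
# 8  1625702400
# 10 13168189440000
# 12 229442532802560000
```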
Computational Projective Geometry: Mathematical Background and Definitions
The following mathematical background and definitions were taken from Kanatani, Geometric Computation for Machine Vision (Clarendon Press, 1993).
FIG. 1b depicts an illustration of a camera model, which provides a conventional model for the 3-D interpretation of perspective projection. Lens 114 projects 3-D object 130 onto film 125 as image 120. The known constant f (the focal length) is the distance between lens center 105 and film surface 125.
FIG. 1c depicts a perspective projection of a scene and a relationship between a space point and an image point. Points on image plane 135 are typically designated by a triplet (m₁, m₂, m₃) of real numbers and are referred to as homogeneous coordinates. If m₃ ≠ 0, point (m₁, m₂, m₃) is identified with the point

x = f m₁/m₃,  y = f m₂/m₃

on image plane 135.
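This identification translates directly into code; the following small helper (its name chosen here for illustration) is a transcription of the formula above:

```python
# Direct transcription of the projection formulas: (m1, m2, m3) with m3 != 0
# maps to the image-plane point (x, y) = (f*m1/m3, f*m2/m3).
def to_image_point(m, f):
    m1, m2, m3 = m
    if m3 == 0:
        raise ValueError("m3 = 0 is a point at infinity; no image-plane point")
    return (f * m1 / m3, f * m2 / m3)

print(to_image_point((2.0, 4.0, 2.0), f=1.0))   # -> (1.0, 2.0)
```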
A line is also defined by a triplet (n₁, n₂, n₃) of real numbers, not all of them being 0. These three numbers are referred to as the homogeneous coordinates of the line.
By definition, homogeneous coordinates can be multiplied by an arbitrary nonzero number, and the point or line that they represent is still the same. They are therefore represented as normalized vectors, or N-vectors, such that:

N[u] = u/‖u‖
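A one-line normalization realizes this definition; the sketch below (helper name ours) also rejects the zero triplet, which is not a valid set of homogeneous coordinates:

```python
# N-vector normalization as defined above: scale a homogeneous triplet to
# unit length so each point or line has a canonical representative.
import numpy as np

def n_vector(u):
    u = np.asarray(u, dtype=float)
    norm = np.linalg.norm(u)
    if norm == 0.0:
        raise ValueError("the zero triplet has no N-vector")
    return u / norm

print(n_vector((3.0, 0.0, 4.0)))   # -> [0.6 0.  0.8]
```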
Space point 150 havin
