Estimation of 3-dimensional shape from image sequence

Image analysis – Applications – 3-d or stereo imaging analysis


Details

Classification: C345S419000
Type: Reexamination Certificate
Status: active
Patent number: 06628819


BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a method which extracts a 3-dimensional shape of an object from a sequence of pictures such as moving pictures that are taken by use of a digital video camera or the like.
2. Description of the Related Art
One of the important research subjects in the field of computer vision is how to find a 3-dimensional shape of an object from moving pictures or a sequence of still pictures, which are taken by using a digital video camera, a digital still camera, or the like. This technology has utility in various application fields such as robot vision, autonomous vehicles, mechanical data entry via a video camera, image coding, 3-dimensional modeling, etc., and is an important topic today in these application fields.
In order to extract 3-dimensional information from a sequence of 2-dimensional images, a scheme called Structure from Motion estimates shape from depth information, which in turn is derived from motion information. Namely, camera movement is obtained first, and then distances of object features from the camera center are obtained to generate an estimate of the object shape. Since feature points show very small positional shifts from one frame to another in moving pictures, however, it is almost impossible to distinguish a translational motion from a rotational motion. Because of this, the computed depth values may be infeasible, resulting in unsuccessful reconstruction of the shape information. When a time sequence is obtained at large sampling intervals, on the other hand, feature points show large movement between frames; in this case, however, reliability in feature-point matching decreases.
In order to obtain stable solutions, Tomasi and Kanade presented a factorization method, which calculates motion and shape concurrently (C. Tomasi and T. Kanade, “Shape and motion from image streams under orthography: A factorization method,” International Journal of Computer Vision, vol. 9, 1992, pp. 137-154, the contents of which are hereby incorporated by reference). This method employs a linear matrix representation based on a linear projection model, and uses singular value decomposition, which is robust against numerical errors. This method can obtain quite stable solutions, a feature that distinguishes it from other schemes.
Further, Poelman and Kanade presented another factorization method based on a paraperspective projection model, which more closely approximates the perspective projection of an actual camera system than the linear projection model does, while maintaining the linear-matrix formalization of the problem to be solved (C. J. Poelman and T. Kanade, “A paraperspective factorization method for shape and motion recovery,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 3, 1997, pp. 206-218, the contents of which are hereby incorporated by reference).
In the following, the paraperspective projection model and the factorization method based thereon will be described.
The paraperspective projection model takes into account both the scaling effect and the positioning effect of perspective projection while maintaining the benefits of linearity of the linear projection system. The scaling effect refers to the fact that the closer an object is to the viewpoint, the larger the object appears. The positioning effect refers to the fact that an object positioned near an edge of a picture frame is viewed at a different angle than an object positioned near the projection center. According to the paraperspective projection model, a projection of an object onto an image plane is obtained through the following steps (a numerical sketch follows the list):
1) define an imaginary plane parallel to the image plane and including a center of gravity of the object;
2) obtain projections of object points onto the imaginary plane by tracing projections parallel to a line connecting the camera center and the center of gravity; and
3) obtain projections of the object points from the imaginary plane onto the image plane via a perspective projection model.
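As a concrete illustration, the following sketch traces these three steps numerically (Python with NumPy; the camera position and the feature-point coordinates are assumed values chosen only for illustration, not data from the patent):

```python
import numpy as np

# Camera at t looking along +z; image plane at focal distance 1.
t = np.array([0.0, 0.0, -10.0])           # camera center (assumed value)
points = np.array([[1.0, 0.5, 0.3],       # object feature points (assumed)
                   [-0.8, 0.2, -0.4],
                   [0.1, -0.6, 0.1]])

c = points.mean(axis=0)                   # center of gravity C

# Step 1: the imaginary plane is parallel to the image plane and passes
# through C; for this axis-aligned camera it is the plane z = c[2].
# Step 2: project each point onto that plane along the direction of the
# line connecting the camera center and the center of gravity.
d = c - t                                 # camera-to-centroid direction
s = (c[2] - points[:, 2]) / d[2]          # step moving each point onto the plane
on_plane = points + s[:, None] * d        # projections on the imaginary plane

# Step 3: perspective projection from the imaginary plane onto the
# image plane (focal distance 1), using the plane's common depth.
z_plane = c[2] - t[2]                     # camera-to-imaginary-plane distance
uv = (on_plane[:, :2] - t[:2]) / z_plane  # image coordinates of all points
print(uv)
```

Choosing an axis-aligned camera keeps the sketch short, since the imaginary plane is then a constant-depth plane; the closed-form expressions given below in equations (1) and (2) cover the general case.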
FIG. 1 is an illustrative drawing for explaining the paraperspective projection model. In FIG. 1, an image plane 2 is provided at a focal distance from a camera center 1. A center of gravity C is obtained with respect to a set of object feature points, pictures of which are taken by the camera. Some of the object feature points are shown in the figure as solid squares. An imaginary plane 3 is parallel to the image plane 2, and includes the center of gravity C. An origin of world coordinates is positioned at the center of gravity C, and the 3-dimensional coordinates of a feature point p are represented by $s_p \in \mathbb{R}^3$.
In an image frame f that is taken out of an image sequence, the camera center 1 has world coordinates $t_f$. Further, the 2-dimensional local coordinates on the image plane 2 have base vectors $i_f, j_f \in \mathbb{R}^3$ (with $\|i_f\| = \|j_f\| = 1$ and $i_f \cdot j_f = 0$), and the optical axis of the camera is represented by a base vector $k_f = i_f \times j_f \in \mathbb{R}^3$. In the image frame f, a 2-dimensional local coordinate system $\Sigma_f = (O_f; i_f, j_f)$ is defined, where the origin $O_f$ is the intersecting point between the vector $k_f$ and the image plane 2.
In the paraperspective projection model, a projection of the feature point p onto the image plane 2 is obtained through the following two steps, as previously described. At the first step, the feature point p is projected onto the imaginary plane 3. This projection is made in parallel to a line that passes through the camera center 1 and the center of gravity C. At the second step, the projection of the feature point on the imaginary plane 3 is further projected onto the image plane 2 via perspective projection. The projection of the feature point p onto the image plane 2 has coordinates $(u_{fp}, v_{fp})$ in the 2-dimensional local coordinate system $\Sigma_f = (O_f; i_f, j_f)$. Here, the focal distance of the camera is assumed to be 1. The coordinates $(u_{fp}, v_{fp})$ are represented as:
$$u_{fp} = m_f \cdot s_p + x_f, \qquad v_{fp} = n_f \cdot s_p + y_f \tag{1}$$
where
$$z_f = -t_f \cdot k_f, \qquad x_f = \frac{-t_f \cdot i_f}{z_f}, \qquad y_f = \frac{-t_f \cdot j_f}{z_f} \tag{2}$$
$$m_f = \frac{i_f - x_f k_f}{z_f}, \qquad n_f = \frac{j_f - y_f k_f}{z_f}$$
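As a sketch of how equations (1) and (2) fit together, the following function (Python/NumPy; the example frame basis and point coordinates are assumed values) computes $z_f$, $x_f$, $y_f$, $m_f$, and $n_f$, and from them the paraperspective coordinates $(u_{fp}, v_{fp})$:

```python
import numpy as np

def paraperspective(i_f, j_f, t_f, s_points):
    """Paraperspective image coordinates per equations (1) and (2).

    i_f, j_f : orthonormal image-plane base vectors of frame f
    t_f      : world coordinates of the camera center
    s_points : (P, 3) feature-point coordinates, origin at the centroid C
    """
    k_f = np.cross(i_f, j_f)              # optical axis
    z_f = -t_f @ k_f                      # distance to the imaginary plane
    x_f = (-t_f @ i_f) / z_f              # projection of the centroid C
    y_f = (-t_f @ j_f) / z_f
    m_f = (i_f - x_f * k_f) / z_f         # equation (2)
    n_f = (j_f - y_f * k_f) / z_f
    u = s_points @ m_f + x_f              # equation (1)
    v = s_points @ n_f + y_f
    return u, v

# Assumed example: camera 10 units behind the centroid, axis-aligned frame.
i_f = np.array([1.0, 0.0, 0.0])
j_f = np.array([0.0, 1.0, 0.0])
t_f = np.array([0.0, 0.0, -10.0])
s_points = np.array([[1.0, 0.5, 0.3], [-0.8, 0.2, -0.4]])
print(paraperspective(i_f, j_f, t_f, s_points))
```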
Here, $z_f$ is the distance from the camera center 1 to the imaginary plane 3, and $(x_f, y_f)$ is the point at which the projection of the center of gravity C is positioned on the image plane 2 via perspective projection. Further, coordinates $(U_{fp}, V_{fp})$, which represent the projection of the feature point p onto the image plane 2 as obtained directly through perspective projection, are represented as:
$$U_{fp} = \frac{i_f \cdot (s_p - t_f)}{z_{fp}}, \qquad V_{fp} = \frac{j_f \cdot (s_p - t_f)}{z_{fp}}, \qquad z_{fp} = k_f \cdot (s_p - t_f) \tag{3}$$
When a Taylor expansion of the coordinates $(U_{fp}, V_{fp})$ around $z_f$ is taken into consideration, it can be seen that the paraperspective projection model is a first-order approximation of the perspective projection model under the assumption of:
$$|s_p|^2 / z_f^2 \approx 0 \tag{4}$$
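The quality of this first-order approximation can be checked numerically. The sketch below (assumed values again) evaluates the exact perspective coordinates of equation (3) and the paraperspective coordinates of equations (1) and (2) while moving the camera away from the object; the discrepancy decays together with $|s_p|^2 / z_f^2$:

```python
import numpy as np

i_f = np.array([1.0, 0.0, 0.0])
j_f = np.array([0.0, 1.0, 0.0])
k_f = np.cross(i_f, j_f)
s_p = np.array([1.0, 0.5, 0.3])           # feature point, centroid at origin

for depth in (10.0, 100.0, 1000.0):       # move the camera farther away
    t_f = np.array([0.0, 0.0, -depth])
    # Perspective projection, equation (3).
    z_fp = k_f @ (s_p - t_f)
    U = i_f @ (s_p - t_f) / z_fp
    V = j_f @ (s_p - t_f) / z_fp
    # Paraperspective projection, equations (1) and (2).
    z_f = -t_f @ k_f
    x_f, y_f = (-t_f @ i_f) / z_f, (-t_f @ j_f) / z_f
    m_f = (i_f - x_f * k_f) / z_f
    n_f = (j_f - y_f * k_f) / z_f
    u, v = m_f @ s_p + x_f, n_f @ s_p + y_f
    # Approximation error alongside the smallness assumption of eq. (4).
    print(depth, abs(U - u) + abs(V - v), (s_p @ s_p) / z_f**2)
```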
In what follows, the factorization method will be described. In the factorization method, P feature points are tracked through F image frames. Then, the 2-dimensional local coordinates $(u_{fp}, v_{fp})$ of the P feature points (p = 1, 2, ..., P) over the F frames (f = 1, 2, ..., F) on the image plane 2 are obtained as a $2F \times P$ matrix:
$$W = \begin{bmatrix} u_{11} & \cdots & u_{1P} \\ \vdots & u_{fp} & \vdots \\ u_{F1} & \cdots & u_{FP} \\ v_{11} & \cdots & v_{1P} \\ \vdots & v_{fp} & \vdots \\ v_{F1} & \cdots & v_{FP} \end{bmatrix} \tag{5}$$
Hereinafter, the matrix W is referred to as a tracking matrix. The upper half of the tracking matrix represents the x coordinates $u_{fp}$ of the feature points, and the lower half represents the y coordinates $v_{fp}$ of the feature points. Each column of the tracking matrix shows the coordinates of a single feature point tracked over the F frames, and each row shows the coordinates of all P feature points within a single frame.
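The following sketch illustrates the assembly of a tracking matrix of the form (5) and the rank-3 SVD factorization at the heart of the method (Python/NumPy; the motion and shape matrices are synthetic placeholders, and the metric-upgrade step that resolves the remaining affine ambiguity is omitted):

```python
import numpy as np

F, P = 30, 12                             # frames and feature points (assumed)
rng = np.random.default_rng(0)

# Synthetic stand-in data with the structure the model predicts:
# a (2F x 3) motion part, a (3 x P) shape part, per-frame translations.
M_true = rng.standard_normal((2 * F, 3))
S_true = rng.standard_normal((3, P))
trans = rng.standard_normal((2 * F, 1))
W = M_true @ S_true + trans               # 2F x P tracking matrix, eq. (5)

# Registration: subtracting each row's mean moves the world origin to
# the centroid of the feature points and removes the translation terms.
W_reg = W - W.mean(axis=1, keepdims=True)

# Core of the factorization method: the registered matrix has rank at
# most 3, so a rank-3 SVD truncation splits it into a motion matrix and
# a shape matrix (up to an affine ambiguity resolved by later metric
# constraints). SVD makes this step robust against numerical errors.
U_, sigma, Vt = np.linalg.svd(W_reg, full_matrices=False)
motion = U_[:, :3] * np.sqrt(sigma[:3])            # 2F x 3
shape = np.sqrt(sigma[:3])[:, None] * Vt[:3, :]    # 3 x P
print(np.linalg.norm(W_reg - motion @ shape))      # ~0: exact rank 3
```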
