System and computer-implemented method for modeling the...

Computer graphics processing and selective visual display system – Computer graphics processing – Three-dimension

Reexamination Certificate


Details

US Classification: C345S426000
Status: active
Patent number: 06724383

FIELD OF THE INVENTION
The invention relates generally to the field of computer graphics, computer-aided geometric design and the like, and more particularly to generating a three-dimensional model of an object.
BACKGROUND OF THE INVENTION
In computer graphics, computer-aided geometric design and the like, an artist, draftsman or the like (generally referred to herein as an “operator”) attempts to generate a three-dimensional model of an object, as maintained by a computer, from lines defining two-dimensional views of objects. Conventionally, computer-graphical arrangements generate a three-dimensional model from, for example, various two-dimensional line drawings comprising contours and/or cross-sections of the object, by applying a number of operations to such lines that result in two-dimensional surfaces in three-dimensional space, and by subsequently modifying parameters and control points of those surfaces to correct or otherwise modify the shape of the resulting model of the object. After a three-dimensional model for the object has been generated, it may be viewed or displayed in any of a number of orientations.
In a field of artificial intelligence commonly referred to as robot vision or machine vision (which will generally be referred to herein as “machine vision”), a methodology referred to as “shape from shading” is used to generate a three-dimensional model of an existing object from one or more two-dimensional images of the object as recorded by a camera. Generally, in machine vision, the type of the object recorded on the image(s) is initially unknown by the machine, and the model of the object that is generated is generally used to, for example, facilitate identification of the type of the object depicted on the image(s) by the machine or another device.
In the shape from shading methodology, the object to be modeled is illuminated by a light source, and a camera, such as a photographic or video camera, is used to record the image(s) from which the object will be modeled. It is assumed that the orientation of the light source, the camera position and the image plane relative to the object are known. In addition, it is assumed that the reflectance properties of the surface of the object are also known. It is further assumed that an orthographic projection technique is used to project the surface of the object onto the image plane, that is, it is assumed that an implicit camera that is recording the image on the image plane has a focal length of infinity. The image plane represents the x,y coordinate axes (that is, any point on the image plane can be identified by coordinates (x,y)), and the z axis is thus normal to the image plane; as a result, any point on the surface of the object that can be projected onto the image plane can be represented by the coordinates (x,y,z). The image of the object as projected onto the image plane is represented by an image irradiance function I(x,y) over a two-dimensional domain $\Omega \subset \mathbb{R}^2$, while the shape of the object is given by a height function z(x,y) over the domain $\Omega$. The image irradiance function I(x,y) represents the brightness of the object at each point (x,y) in the image. In the shape from shading methodology, given I(x,y) for all points (x,y) in the domain, the shape of the object, given by z(x,y), is determined.
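For illustration only, in a discrete setting both functions are commonly sampled on a pixel grid. The following minimal Python/NumPy sketch, with hypothetical array sizes, shows one way the two quantities might be represented:

```python
import numpy as np

# Hypothetical discretization of the domain Omega as an H x W pixel grid.
H, W = 256, 256

# Image irradiance function I(x, y): one brightness value per pixel,
# e.g. loaded from a grayscale image and scaled to [0, 1].
I = np.zeros((H, W), dtype=np.float64)

# Height function z(x, y): the unknown that shape from shading recovers.
z = np.zeros((H, W), dtype=np.float64)
```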
In determining the shape of an object using the shape from shading methodology, several assumptions are made, namely,
(i) the direction of the light source is known;
(ii) the shape of the object is continuous;
(iii) the reflectance properties of the surface of the object are homogenous and known; and
(iv) the illumination over at least the portion of the surface visible in the image plane is uniform.
Under these assumptions, the image irradiance function I(x,y) for each point (x,y) on the image plane can be determined as follows. First, changes in the surface orientation of the object are given by the first partial derivatives of the height function z(x,y) with respect to x and y,
$$p(x,y) = \frac{\partial z(x,y)}{\partial x} \qquad \text{and} \qquad q(x,y) = \frac{\partial z(x,y)}{\partial y}, \tag{1}$$
where p-q space is referred to as the “gradient space.” Every point (p,q) of the gradient space corresponds to a particular value for the surface gradient. If the surface is continuous, values for p and q are dependent on each other since the cross-partial-derivatives have to be equal, that is:

$$\frac{\partial p(x,y)}{\partial y} = \frac{\partial q(x,y)}{\partial x}. \tag{2}$$
(Equation (2) holds if the surface is continuous because each side is a mixed second partial derivative of the height function z(x,y), taken with respect to x and y in opposite orders, and these mixed partial derivatives are equal.) Equation (2) is referred to as the “integrability constraint,” which, if it holds, will ensure that the surface is smooth and satisfies equation (1).
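As a concrete illustration of equations (1) and (2), the following Python/NumPy sketch, using a hypothetical synthetic height field, approximates p and q by finite differences and evaluates the integrability residual $\partial p/\partial y - \partial q/\partial x$, which should be close to zero for a smooth surface:

```python
import numpy as np

def gradients(z, dx=1.0, dy=1.0):
    """Approximate p = dz/dx and q = dz/dy of a sampled height field z(x, y)
    using central finite differences (rows index y, columns index x)."""
    # np.gradient returns derivatives along axis 0 (y) first, then axis 1 (x).
    q, p = np.gradient(z, dy, dx)
    return p, q

def integrability_residual(p, q, dx=1.0, dy=1.0):
    """Return dp/dy - dq/dx, which equation (2) requires to vanish."""
    dp_dy = np.gradient(p, dy, axis=0)
    dq_dx = np.gradient(q, dx, axis=1)
    return dp_dy - dq_dx

# Hypothetical smooth synthetic surface: the residual should be near zero.
y, x = np.mgrid[0:64, 0:64] * 0.1
z = np.sin(x) * np.cos(y)
p, q = gradients(z, dx=0.1, dy=0.1)
print(np.max(np.abs(integrability_residual(p, q, dx=0.1, dy=0.1))))
```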
The relationship between the image irradiance function I(x,y) and the surface orientation (p,q) is given by a function R(p,q), which is referred to as a reflectance map:

$$I(x,y) = R\bigl(p(x,y),\, q(x,y)\bigr). \tag{3}$$
Equation (3) is referred to as the “image irradiance equation.” As an example, a relatively simple reflectance map exists for objects which have a Lambertian surface. A Lambertian surface appears to be equally bright from all viewing directions, with the brightness being proportional to the light flux incident on the surface. The reflectance $R_L(p,q)$ is proportional to the cosine of the angle $\alpha$ between the direction normal to the surface, represented by the vector $\vec{n}$, and the incident light ray direction, represented by the vector $\vec{L}$, that is,
$$R_L(p,q) = \cos\alpha = \vec{n} \cdot \vec{L}, \tag{4}$$
where $\vec{n} = (p,q,1)$ is determined by p(x,y) and q(x,y), and $\vec{L} = (x_L, y_L, z_L)$ gives the direction of the light source.
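Once a light direction is chosen, a Lambertian reflectance map of this kind can be evaluated directly over gradient space. The following minimal Python/NumPy sketch uses the normal $(p,q,1)$ as written above and, as additional assumptions, normalizes both vectors so the dot product is a true cosine and clamps negative values (points facing away from the light); the light direction and grid are hypothetical:

```python
import numpy as np

def lambertian_reflectance(p, q, light_dir):
    """Evaluate R_L(p, q) = cos(alpha) = n . L for a Lambertian surface.

    The surface normal is taken as (p, q, 1), as in the text above; both
    vectors are normalized so the dot product equals cos(alpha), and
    negative values are clamped to zero."""
    n = np.stack([p, q, np.ones_like(p)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    L = np.asarray(light_dir, dtype=np.float64)
    L /= np.linalg.norm(L)
    return np.clip(n @ L, 0.0, None)

# Hypothetical usage: reflectance over a grid of gradient-space points (p, q).
p, q = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
R = lambertian_reflectance(p, q, light_dir=(0.3, 0.2, 1.0))
```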
Typically, shape from shading is performed in two steps. First, the partial derivatives p and q of the height function z(x,y) are determined to obtain the normal information $\vec{n}$; in the second step, the height z(x,y) is reconstructed from p and q. The partial derivatives p and q can be determined by solving the system of equations consisting of the image irradiance equation (3) and the integrability constraint equation (2). Since images can be noisy and the assumptions noted above are sometimes not perfectly satisfied, there may be no solution using this methodology, and in any case there will be no unique solution.
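The second step, recovering z(x,y) from p and q, amounts to integrating the gradient field. The sketch below (Python/NumPy) illustrates only one simple strategy, naive cumulative-sum path integration along two paths followed by averaging; it is not the method claimed by the invention, and it recovers z only up to an additive constant. Least-squares or Fourier-based methods (e.g. Frankot-Chellappa) are typically preferred when p and q are noisy.

```python
import numpy as np

def integrate_height(p, q, dx=1.0, dy=1.0):
    """Reconstruct a height field z(x, y) from gradient estimates p and q by
    naive path integration: cumulative sums along x and along y, averaged.
    The result is defined only up to an additive constant."""
    # Integrate down the first column with q, then across rows with p.
    z_x_first = np.cumsum(p, axis=1) * dx + np.cumsum(q[:, :1], axis=0) * dy
    # Integrate across the first row with p, then down columns with q.
    z_y_first = np.cumsum(q, axis=0) * dy + np.cumsum(p[:1, :], axis=1) * dx
    return 0.5 * (z_x_first + z_y_first)
```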
SUMMARY OF THE INVENTION
The invention provides a new and improved system and method for generating a three-dimensional model of an object by shading as applied by an operator or the like to a two-dimensional image of the object in the given state of its creation at any point in time.
In brief summary, the invention provides a computer graphics system for facilitating the generation of a three-dimensional model of an object in an interactive manner with an operator, such as an artist or the like. Generally, the operator will have a mental image of the object whose model is to be generated, and the operator will co-operate with the computer graphics system to develop the model. The computer graphics system will display one or more images of the object as currently modeled from rotational orientations, translational positions, and scaling or zoom settings as selected by the operator, and the operator can determine whether the object corresponds to the mental image.
In the model generation process, an initial model for the object is initialized and an image thereof is displayed to the operator by the computer graphics system. The image that is displayed will reflect a particular position of a light source and camera relative to the object, the position of the light source relative to the object defining an illumination direction, and the position of the camera relative to the object defining an im
