Realistic surface simulation in computer animation

Computer graphics processing and selective visual display system – Computer graphics processing – Animation


Details

Classification: C345S473000, C345S440000

Type: Reexamination Certificate

Status: active

Patent number: 06300960

ABSTRACT:

FIELD OF THE INVENTION
The invention relates generally to the art of computer generated animation. More particularly, the invention relates to the realistic modeling of characters in computer generated animation.
BACKGROUND OF THE INVENTION
To create a three dimensional computer animation, the animator must move three dimensional objects and characters about the scene by specifying the location of all parts of all objects in every frame, or at least in key frames between which motion can be interpolated. In the case of realistic animation in which the individual characters are highly detailed with many attributes, e.g., possible facial expressions, this can be a monumental task. As a first step, the process often begins with a detailed physical model of the character which is then scanned and stored in the computer as a two dimensional array of control points corresponding to vertices of a mesh which approximates the model's bounding surface. The control point mesh contains sufficient information to recreate the model using either B-Spline patches, a polygon mesh, or recursive subdivision surfaces, in sufficient detail to produce a high quality rendered image.
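For illustration only, here is a minimal sketch (not drawn from the patent itself) of how such a scanned control-point mesh might be stored, assuming a regular grid of surface samples and hypothetical names like control_points and quads:

```python
import numpy as np

# Hypothetical layout: one 3D control point per (row, column) sample of the
# model's bounding surface, plus a face list saying which points form each
# quad. A renderer could fit B-spline patches or a subdivision surface to it.
rows, cols = 4, 5
control_points = np.zeros((rows, cols, 3))          # 2D array of 3D control points
quads = [((r, c), (r, c + 1), (r + 1, c + 1), (r + 1, c))
         for r in range(rows - 1) for c in range(cols - 1)]
```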
In the traditional “entire kinematic control” approach to computer generated animation, the animator controls the movement of the character by individually specifying the movement of each of the numerous control points. The difficulty is that a highly detailed human head may be represented by 5000 or more control points. The ability to individually manipulate the locations of that many degrees of freedom is both a blessing and a curse. Though it allows the animator to direct the motion, facial expressions, and gestures of the characters in great detail, independently specifying the motion of so many points is an immensely tedious process.
Moreover, because the locations of the control points are independently variable and not tied together, physical constraints, e.g., that they belong to a semi-rigid body, must be individually enforced by the animator, who must, for instance, ensure that control points do not crash into each other or otherwise behave in an unnatural manner which would give rise to unacceptable artifacts in the final rendered scenes.
One may of course drastically reduce the number of degrees of freedom which must be specified, and ensure that the above-described physical constraints are enforced, by treating all or part of the character as a rigid body, thereby tying a set of the control points together. One may for instance limit the degrees of freedom necessary to specify the movement of a character's jaw by treating the lower jaw as a rigid body for which the locations of the control points depend only on their initial locations and the angle of rotation of the jaw.
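A brief sketch of that rigid-jaw simplification, using hypothetical names (rotate_rigid_points, pivot, axis) and Rodrigues' rotation formula; the deformed positions depend only on the rest positions and the single jaw angle:

```python
import numpy as np

def rotate_rigid_points(rest_points, pivot, axis, angle):
    """Rigidly rotate control points about a hinge (e.g., a jaw pivot).

    Every deformed position is a function of the point's rest position and one
    scalar joint angle, so a single control drives the whole point set.
    """
    axis = axis / np.linalg.norm(axis)
    # Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    rot = np.eye(3) + np.sin(angle) * k + (1.0 - np.cos(angle)) * (k @ k)
    return (rest_points - pivot) @ rot.T + pivot

# Hypothetical lower-jaw points, all following the same 20-degree rotation.
jaw_points = np.array([[0.0, -1.0, 0.5], [0.2, -1.1, 0.4]])
opened = rotate_rigid_points(jaw_points, pivot=np.zeros(3),
                             axis=np.array([1.0, 0.0, 0.0]),
                             angle=np.radians(-20.0))
```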
While the constraints of rigid body kinematics greatly lighten the modeling load and work well for the animation of actual rigid bodies, e.g., toys and robots, and as a first approximation for the large scale movements of extremities, they do not result in the realistic animation of people or animals. Animating faces is particularly problematic, both because of the many small movements which make up expressive facial gestures and because skin can be both somewhat loose and elastic, so that realistic facial movements are neither localized nor representable in terms of simple rigid body kinematics, e.g., the skin of a jowly character may move little or not at all, much less exactly track the angle of rotation of his jaw.
One commonly used method for more efficiently specifying the motion of sets of control points, which utilizes some of the simplifications of rigid body kinematics, is the use of animation controls to implement elementary physical movements. In this method, described in more detail below in the discussion of the exemplary embodiment, the motion of numerous points is specified in terms of a single control, as in the rigid body case, but the control points affected by the control are not required to move the same amount. Rather, their relative motion is determined by a weight assigned to each point. Though this essentially turns a three dimensional problem into a one dimensional one, it nonetheless requires the articulation engineer to perform the still tedious task of specifying hundreds or thousands of individual weights.
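A minimal sketch of a single weighted animation control of this kind, with hypothetical names (apply_weighted_control, control_delta); each point moves by its own weight times the displacement the animator gives the control:

```python
import numpy as np

def apply_weighted_control(rest_points, weights, control_delta):
    """One control drives many points; per-point weights scale its effect.

    rest_points: (N, 3) control point positions.
    weights:     (N,) values, typically in [0, 1], set by the articulation engineer.
    control_delta: (3,) displacement the animator assigns to the control.
    """
    return rest_points + weights[:, None] * control_delta
```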
Moreover, the character's desired motion will often require the superposition of multiple simultaneous elementary controls. That is, a single point may be subject to, i.e., have non-zero weights corresponding to, several different simultaneous controls. The ultimate motion of the point is then determined to be the sum of the motions required by each control. Unfortunately, this result does not always mirror reality; e.g., if a character opens its jaw and wiggles its ear at the same time, the motion of a point on its cheek is not really the sum of the motions of the point under those two acts performed independently.
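Extending the previous sketch, the superposition of several simultaneous controls might look like the following: each point's total motion is simply the sum of its weighted displacements, which, as noted, does not always mirror reality.

```python
import numpy as np

def apply_controls(rest_points, weight_table, control_deltas):
    """Superpose several controls: a point's final motion is the sum of the
    weighted displacements requested by every control acting on it."""
    moved = rest_points.copy()
    for weights, delta in zip(weight_table, control_deltas):
        moved += weights[:, None] * delta    # same rule as a single control, summed
    return moved
```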
One ambitious approach to realistically model the behavior of skin and avoid the tedious and difficult point-by-point kinematic approach is to model skin as a dynamical system driven by underlying skeletal movements. This approach of building a face from the inside out requires solving the equations of motion for the control points as elements of a semi-elastic damped surface subject to a driving force provided by the animator-controlled movement of underlying “skeletal” components.
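As a rough illustration of the kind of dynamical, inside-out simulation described here (a generic damped spring system, not the patented method), one explicit Euler time step might look like this, with assumed parameter names such as stiffness, damping, and skeleton_targets:

```python
import numpy as np

def step_damped_skin(pos, vel, skeleton_targets, stiffness=50.0, damping=4.0,
                     mass=1.0, dt=1.0 / 24.0):
    """One explicit Euler step of a damped spring system: each skin point is
    pulled toward the position dictated by the underlying skeleton, and the
    velocities carried between steps give the surface momentum."""
    force = stiffness * (skeleton_targets - pos) - damping * vel
    vel = vel + (force / mass) * dt
    pos = pos + vel * dt
    return pos, vel
```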
Even if one could model such a complex system in a way which gave realistic results, from the animator's perspective this pure dynamical approach suffers from two related shortcomings. The first problem is that truth does not always equal beauty. Even if one could accurately model the movement of skin resulting from underlying rigid body kinematics and the dynamics of the connective tissue, the result may not have the appearance desired by the animator. Animators creating computer generated animation as artistic works or for commercial entertainment are more interested in the appearance of reality than in reality itself, and oftentimes not even in the appearance of reality but rather in caricature and exaggeration. In any event, animators often do not want to give up the degree of control over the final image required by a purely dynamical approach to modeling skin and facial features. Nor do they want to give up control of the details of the character and rely instead on the manipulation of a skeleton and musculature and some approximation to the intervening biomechanics.
Relatedly, because a dynamical approach involves solving equations of motion over time, including the effects of momentum, it ties the motion of different frames together into a single trajectory. An animator may vary the trajectory by changing the initial conditions and driving force, but cannot directly alter the animation independently on a frame-by-frame basis. Again, this requires that the animator relinquish a large chunk of artistic freedom, this time to Newtonian mechanics. Accordingly, there is a need for a more efficient means of modeling characters in computer animation which retains for the animator the ability to substantially control the detailed appearance and movement of the character.
SUMMARY OF THE INVENTION
The present invention allows the animator to create realistic character animation, including skin and facial gestures, by manipulating a detailed model of the actual character while at the same time incorporating some of the simplifications and constraints of rigid body kinematics, and the ability to localize the motion that must be explicitly specified, in order to greatly reduce the number of control points whose locations need to be independently determined. The present invention can be viewed as a combination of the entire kinematic approach and the dynamical approach, retaining the animator's control of the former with the potential realism of the latter.
In the exemplary embodiment, the skin, fur, or other covering (including clothing), is modeled as an elastic quasi-static surface that is a copy of, and elastically tied to, an underlying object
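A hedged sketch of what such a quasi-static, elastically tied surface could look like in practice, assuming hypothetical names (relax_quasi_static, tie_k, edge_k) and a simple gradient-style relaxation: each frame the skin copy is pulled toward the kinematically posed reference and toward its reference edge lengths, with no velocities or momentum carried between frames.

```python
import numpy as np

def relax_quasi_static(skin, reference, edges, tie_k=1.0, edge_k=0.5,
                       iterations=100, step=0.1):
    """Quasi-static relaxation: each frame the skin is driven to equilibrium
    (no momentum), pulled by springs tying every point to its copy on the
    kinematically posed reference surface and by edge springs that resist
    stretching along the mesh."""
    rest_len = {(i, j): np.linalg.norm(reference[i] - reference[j])
                for i, j in edges}
    skin = skin.copy()
    for _ in range(iterations):
        force = tie_k * (reference - skin)            # pull toward the posed copy
        for i, j in edges:
            d = skin[j] - skin[i]
            dist = np.linalg.norm(d)
            if dist > 1e-9:
                f = edge_k * (dist - rest_len[(i, j)]) * d / dist
                force[i] += f
                force[j] -= f
        skin += step * force                          # descend toward equilibrium
    return skin
```

Because the solve is repeated from the posed reference every frame, the animator can still adjust any individual frame without fighting the momentum of a running simulation.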
