Title/Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Animation
Type: Reexamination Certificate
Filed: 2000-05-03
Issued: 2001-11-13
Examiner: Zimmerman, Mark (Department: 2671)
U.S. Classes: 345/474; 345/649
Status: Active
Patent Number: 06317132
BACKGROUND INFORMATION
The quality of computer animation depends largely on its ability to convey a sense of realism, so that the animated figures on the screen resemble the actions of real people and animals. Excellent computer-generated animation programs are capable of portraying animated human-like characters which convey human-like emotions and perform human-like movements. The greater the sense of realism exhibited by an animated character, for example an emotionally interactive image puppet, the more the computer animation program is desired and appreciated by the using public.
In order for the animated characters to be able to convey and perform a wide variety of human-like emotions and movements, the computer programmer is generally obligated to write a separate subroutine (module) for every emotion or movement desired to be incorporated in the animated character's persona.
Due to the mass storage and feasibility constraints of presently commercially available digital computer systems, computer programmers can simulate only a very limited number of emotions in animated characters. These emotions are the same for each individual animated character, and the animation program therefore lacks a sense of realism.
Emotional expressiveness denotes the capability of the animated character to simulate human emotions, such as happiness, sadness, sarcasm, and inquisitiveness. Since it is very difficult to exhibit emotion in an animated character, computer programmers have concentrated on giving animated characters the ability to move and interact somewhat like humans. By utilizing vector and matrix mathematics, computer programmers have managed to incorporate into animated characters a few basic kinematic (movement) characteristics, such as translate, rotate, swivel and scale (see, e.g., Girard et al., “Computational Modeling for the Computer Animation of Legged Figures,” Siggraph '85 Proceedings, Vol. 19, No. 3, 1985, pp. 263-270). In addition, computer programmers have managed to incorporate a few human-like gestures into the animated characters. For example, when a human is explaining something, his/her arms may wave about in a particular way. However, most computer animation packages lack the capability of conveying “emotional expressiveness” to the animated character, indicative of the movements or gestures portrayed.
An objective shared by many computer programmers is to have each animated character exhibit a large number of individualistic emotions which correspond to the character's physical gestures.
Therefore, it is an object of the present invention to create animated characters which have the capability of conveying human-like emotions through the movements and/or gestures portrayed, in order to convey a sense of realism. A further object of the present invention is to provide a method for conveying a smooth transition from one gesture to another.
An additional object of the present invention is to provide the animator with the capability of simulating a large number of emotions and movements without having to write a separate subroutine (module) for each emotion and movement.
A further object of the present invention is to portray animated video and/or movie characters having good visual representations of the expressiveness of human emotion together with real-time responsiveness.
Still another object of the present invention is to portray animated video and/or movie characters as sensitive to their surroundings, so that, for example, the characters successfully avoid obstacles and navigate openings such as doors.
An additional object of the present invention is to provide a means for restricting character movements, for example, requiring a character to face forward at various times.
These and other objects of the present invention will become more apparent in the following sections.
SUMMARY OF THE INVENTION
The present invention relates to a gesture synthesizer for animation of human and animal characters by a succession of video and/or movie images. The invention provides the animator with a library of building block gesture “actions” or modules, such as standing, walking, and specific dancing movements. These gesture actions are preferably combined in sequences, enabling the animator to easily create sophisticated animation sequences, a process which is currently very expensive and requires custom programming. The life-like nature of these building block actions, combined with realistic transitioning from one action to another, enables these animated characters to convincingly convey emotion and to respond to each other and to a changing backdrop, all in real time.
The invention incorporates information from procedural texture synthesis and applies that information to the control of the emotional effect of human-like animation figures. Procedural texture synthesis simulates the appearance of realistic texture by combining simple stochastic functions with an interpreted computer language, as described in Perlin, “An Image Synthesizer,” Siggraph '85 Proceedings, Vol. 19, No. 3, 1985, pp. 287-296. The techniques of procedural texture synthesis are used in the computer industry, appear in commercial 3D rendering packages, and are present in various computer graphic films and television commercials.
The present invention uses the procedural texture synthesis approach to control limb motion: controlled stochastic functions govern the motion of an animated character's limbs over time. Different stochastic noises or “textures” (such as those noted above) can be combined to exhibit actions that convey different emotions.
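As a rough illustration of this idea, the following is a minimal Python sketch, not taken from the patent; the names (value_noise, noise_texture, joint_angle) and constants are hypothetical. It shows how a “texture” can be built by summing octaves of a simple stochastic function and then scaled into a joint-angle undulation with a chosen average position, range, and frequency:

```python
import math
import random

def value_noise(t, seed=0):
    # Smoothly interpolate between reproducible pseudorandom values placed
    # at integer points of the time axis (a simple stochastic function).
    i = math.floor(t)
    r0 = random.Random(seed * 73856093 + i).uniform(-1.0, 1.0)
    r1 = random.Random(seed * 73856093 + i + 1).uniform(-1.0, 1.0)
    u = t - i
    u = u * u * (3.0 - 2.0 * u)            # smoothstep easing between samples
    return r0 + u * (r1 - r0)

def noise_texture(t, octaves=3, seed=0):
    # Combine several octaves of the stochastic function into a richer "texture".
    total, amp, freq = 0.0, 1.0, 1.0
    for k in range(octaves):
        total += amp * value_noise(t * freq, seed + k)
        amp *= 0.5
        freq *= 2.0
    return total

def joint_angle(t, center, swing, frequency, seed=0):
    # Average joint position plus a noise-shaped undulation whose amplitude
    # and speed are set by the swing (range) and frequency parameters.
    return center + swing * noise_texture(t * frequency, seed=seed)

# Example: a gently swaying head rotation, sampled at 30 frames per second.
angles = [joint_angle(f / 30.0, center=0.0, swing=5.0, frequency=0.4, seed=7)
          for f in range(90)]
```

Because the samples are reproducible for a given seed, the same character replays the same “texture” each time it is animated, while different seeds yield individual variation.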
This approach allows realistic interaction among animated characters/figures without the need for low-level manual control. The programmer/user can specify a sequence and/or combination of different gesture actions. Each action is implemented as a set of coupled frequency and range of pseudorandom time-varying signals sent to each limb. The transition from one action to the next is smooth and life-like. The present invention allows control over average position and frequency of undulation, while conveying a “natural” quality to all motion.
The present invention enables one to create animated characters which have the capability of conveying and performing human-like emotions and movements, including the ability to navigate obstacles. In addition, the invention allows one to simulate a large number of emotions and movements without having to write a separate subroutine (module) in the computer program for each gesture or to solve complex equations. Further, since the system of the present invention applies stochastic noise to create time-varying parameters, it is able to portray a good visual representation of the expressiveness of human emotion, together with real-time responsiveness to the corresponding gestures.
In one embodiment of the present invention, a menu of thirty gesture actions, such as “walking”, “latin rumba”, and “fencing”, is presented to the user. The user moves a cursor to select the desired gesture actions and their order and timing. The motion of the animated character's body is defined at the joint level by a joint-by-joint convex sum of weighted actions. Each body part, such as a head, a wrist, a lower arm, an upper arm, a lower leg, an upper leg, etc., is attached at a joint. Preferably, there are nineteen joints. Each joint has three independent axes of rotation (i.e., about the x, y and z axes). Each axis of rotation is associated with two action-specific key reference angles (parameters). Each axis is further associated with a third parameter which is a function of time, and which may be expressed in terms of a sine, cosine or noise function. This third, time-dependent parameter controls the speed of movement by determining, at any given moment, where each body part is positioned within its allowable range of rotation.
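A hedged sketch of that joint-level computation follows; the patent publishes no source code, and the names (AxisParams, sine_phase, blended_rotation) and example joints and angles are hypothetical. Each action contributes, per joint axis, an angle interpolated between its two key reference angles by a time function, and the final joint rotation is the convex (weights summing to one) combination of the contributing actions:

```python
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class AxisParams:
    angle_lo: float                       # first action-specific key reference angle
    angle_hi: float                       # second action-specific key reference angle
    time_fn: Callable[[float], float]     # maps time to [0, 1]; sine, cosine, or noise based

def sine_phase(frequency):
    # A [0, 1]-valued time function; a noise texture (as sketched earlier) could be substituted.
    return lambda t: 0.5 * (1.0 + math.sin(2.0 * math.pi * frequency * t))

# Each action supplies AxisParams for every (joint, axis) it drives;
# a full character would cover nineteen joints with x, y and z axes each.
walk = {("knee_L", "x"): AxisParams(0.0, 60.0, sine_phase(1.2)),
        ("hip_L", "x"): AxisParams(-20.0, 25.0, sine_phase(1.2))}
rumba = {("knee_L", "x"): AxisParams(5.0, 40.0, sine_phase(0.9)),
         ("hip_L", "x"): AxisParams(-30.0, 35.0, sine_phase(0.9))}

def blended_rotation(joint_axis, t, weighted_actions):
    """Convex sum, for one joint axis, of the angles proposed by each action;
    the weights are assumed to sum to one."""
    angle = 0.0
    for action, weight in weighted_actions:
        p = action[joint_axis]
        phase = p.time_fn(t)   # where we are, right now, in the allowable range
        angle += weight * (p.angle_lo + phase * (p.angle_hi - p.angle_lo))
    return angle

# With equal weights, the left knee is an even blend of the two actions.
print(blended_rotation(("knee_L", "x"), 0.5, [(walk, 0.5), (rumba, 0.5)]))
```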
The sequence from one action to the next action is selected by the animator/user. A smooth and natural-looking transition from one action to the next selected action is automatically generated by the software.
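One way such a transition could be generated, sketched here as an assumption rather than the patent's stated method, is to ramp the action weights smoothly over a short interval so the convex combination shifts from the outgoing action to the incoming one (the name transition_weights is hypothetical):

```python
def transition_weights(t, t_start, duration):
    """Weights for (previous action, next action) during a transition that
    begins at t_start and lasts `duration` seconds; they always sum to 1,
    so the blend of the two actions remains a convex combination."""
    u = (t - t_start) / duration
    u = min(max(u, 0.0), 1.0)            # clamp to the transition interval
    u = u * u * (3.0 - 2.0 * u)          # ease in and out for a natural ramp
    return 1.0 - u, u

# Example: one second into a two-second transition, the character is an
# even mix of the outgoing and incoming gesture actions.
w_prev, w_next = transition_weights(t=1.0, t_start=0.0, duration=2.0)
```

Feeding these weights into a convex sum such as the one sketched above moves every joint gradually from the old gesture's key reference angles to the new gesture's, rather than jumping between poses.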
Attorney/Agent: Baker & Botts L.L.P.
Assignee: New York University
Examiners: Nguyen, Kimbinh T.; Zimmerman, Mark