Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Animation
Type: Reexamination Certificate
Filed: 1998-05-04
Issued: 2001-06-19
Examiner: Zimmerman, Mark (Department: 2671)
U.S. Classifications: C345S474000, C345S215000
Status: active
Patent Number: 6,249,292
FIELD OF THE INVENTION
The present invention relates generally to the field of computer generated modeling and, more particularly, to a technique for controlling a presentation of a computer generated object having a plurality of movable components.
BACKGROUND OF THE INVENTION
As is known in the art of computer animation and modeling, the DECface™ product developed by Digital Equipment Corporation provides a computer generated talking synthetic face. The DECface™ computer generated talking synthetic face is a visual complement to the DECtalk™ product, a speech synthesizer also developed by Digital Equipment Corporation. By combining the audio functionality of a speech synthesizer with the graphical functionality of a computer generated talking synthetic face, a variety of engaging user interfaces can be provided. Examples include internet-based agents capable of seeking and retrieving documents on the world-wide web, avatars for chat applications, and front-end interfaces for kiosks.
A technique for adaptively synchronizing an audio signal of a speech synthesizer with a facial image being displayed is described by Waters et al. in U.S. Pat. No. 5,657,426, entitled Method and Apparatus for Producing Audio-Visual Synthetic Speech, issued Aug. 12, 1997, assigned to the assignee of the present application, and hereby incorporated herein by reference. Waters et al. disclose a speech synthesizer that generates fundamental speech units called phonemes, which are converted into audio signals. The phonemes are also converted into visual facial configurations called visemes (i.e., distinct mouth postures). The visemes are grouped into sequences of mouth gestures approximating the gestures of speech. The sequences of mouth gestures are then synchronized to the corresponding audio signals.
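The phoneme-to-viseme conversion described above can be sketched as a simple lookup followed by grouping of repeated postures. The phoneme symbols and viseme names below are hypothetical examples for illustration only, not the actual tables used by Waters et al.:

```python
# Illustrative phoneme-to-viseme mapping; symbols and groupings are
# hypothetical, not the actual tables of Waters et al.
PHONEME_TO_VISEME = {
    "p": "bilabial_closed",   # p, b, m share a closed-lips posture
    "b": "bilabial_closed",
    "m": "bilabial_closed",
    "f": "labiodental",       # f, v: lower lip against upper teeth
    "v": "labiodental",
    "aa": "open_wide",        # open vowels map to a wide mouth posture
    "iy": "spread",           # "ee" sound: spread lips
}

def phonemes_to_visemes(phonemes):
    """Convert a phoneme sequence into a sequence of mouth postures,
    collapsing consecutive identical visemes into one gesture."""
    visemes = []
    for p in phonemes:
        v = PHONEME_TO_VISEME.get(p, "neutral")
        if not visemes or visemes[-1] != v:
            visemes.append(v)
    return visemes
```

Collapsing adjacent duplicates reflects how several phonemes can share one mouth posture, so a run of them forms a single gesture in the sequence.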
While Waters et al. provide a technique for synchronizing audio speech with visual mouth gestures, other gestures, such as those associated with facial or other body movements, are not addressed. That is, Waters et al. do not address the other gestures which typically accompany mouth gestures during speech.
Waters et al. also do not address gestures which are not associated with speech at all, that is, facial or other body movements which by themselves are often a means of expression or communication.
Some attempts have been made to provide animated facial and other body gestures. For example, animated facial gestures made up of individual facial components have been provided in accordance with the teachings of Parke, F. and Waters, K., in Computer Facial Animation, A K Peters, Ltd. (1996), which is hereby incorporated herein by reference. However, controlling the animation of such animated facial gestures is cumbersome, since each individual facial component has to be individually controlled at every instant in time.
In view of the foregoing, it is apparent that previously proposed techniques for providing a computer generated synthetic face do not provide certain features which would make the computer generated synthetic face more realistic. Also, the previously proposed techniques do not allow a computer generated synthetic face to be easily controlled. Accordingly, it would be desirable to provide a technique for providing a more realistic and easily controllable computer generated synthetic face.
OBJECTS OF THE INVENTION
The primary object of the present invention is to provide a technique for controlling a presentation of a computer generated object having a plurality of movable components.
The above-stated primary object, as well as other objects, features, and advantages of the present invention, will become readily apparent from the following detailed description, which is to be read in conjunction with the appended drawings.
SUMMARY OF THE INVENTION
According to the present invention, a technique for controlling a presentation of a computer generated object having a plurality of movable components is provided. The technique can be realized by having a processing device such as, for example, a digital computer, receive a gesture element and an audio element. The gesture element represents a gesture (e.g., a smile, a frown, raising eyebrows, a wink, etc.) involving one or more of the plurality of movable components. The audio element represents an audio signal (e.g., speech, a whistle, humming, etc.). The gesture element and the audio element are received by the processing device in a sequential order.
The processing device processes the gesture element and the audio element in the sequential order so that each of the plurality of movable components associated with the gesture element is moved to perform the gesture, and the audio signal associated with the audio element is generated, during a presentation of the computer generated object. The presentation of the computer generated object can be performed on a monitor such as, for example, a cathode ray tube (CRT).
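The sequential processing described above can be sketched as a loop over an ordered element stream. The `GestureElement` and `AudioElement` classes and the log format are assumptions for illustration; the patent specifies only that elements arrive and are processed in order:

```python
# Minimal sketch of sequential element processing; class names and the
# action log are hypothetical, since the patent defines no concrete API.
class GestureElement:
    def __init__(self, name, components):
        self.name = name
        self.components = components  # movable components involved

class AudioElement:
    def __init__(self, text):
        self.text = text  # e.g. speech to be synthesized

def process_elements(elements, log):
    """Process gesture and audio elements in the order received,
    recording each resulting action in the log."""
    for element in elements:
        if isinstance(element, GestureElement):
            for component in element.components:
                log.append(f"move {component} for {element.name}")
        elif isinstance(element, AudioElement):
            log.append(f"speak: {element.text}")

log = []
process_elements(
    [GestureElement("smile", ["left_cheek", "right_cheek"]),
     AudioElement("Hello")],
    log,
)
```

Because the elements are consumed in arrival order, the smile's component movements are performed before the speech is generated; overlapping the two (as discussed below) would require scheduling rather than strict sequencing.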
In accordance with aspects of the invention, the processing device can process the gesture element and the audio element such that the gesture is performed and the audio signal is generated simultaneously.
The computer generated object can be, for example, a computer generated face. In such a case, the plurality of movable components may be facial muscles, the gesture might be a facial expression, and the audio signal might be speech which includes a particular message.
Beneficially, the gesture element has an associated modifier. The associated modifier can correspond, for example, to a performance rate, such as the speed at which the gesture is performed, or to a performance extent, such as the magnitude at which the gesture is performed. The gesture element can be defined to have a temporal duration. That is, the gesture element can be processed by the processing device such that the gesture is performed over a specified period of time.
In accordance with other aspects of the invention, the gesture element can be defined using at least one gesture component such as, for example, a face muscle or an eyelid. Each gesture component has an associated modifier which corresponds, for example, to a performance extent of the gesture component.
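A gesture element with rate and extent modifiers, a temporal duration, and per-component extents might be represented as follows. The field names, value ranges, and units are assumptions for illustration; the patent defines no concrete representation:

```python
# Sketch of a gesture element built from per-component extents, with
# gesture-level rate/extent modifiers and a duration. All names and
# units (seconds, 0..1 extents) are illustrative assumptions.
def make_gesture(name, components, rate=1.0, extent=1.0, duration=1.0):
    """Return a gesture whose per-component extents are scaled by the
    gesture-level extent modifier; duration is the time over which
    the gesture is performed."""
    return {
        "name": name,
        "rate": rate,
        "duration": duration,
        # each component keeps its own extent, scaled by the gesture extent
        "components": {c: e * extent for c, e in components.items()},
    }

smile = make_gesture(
    "smile",
    {"left_cheek_muscle": 0.8, "right_cheek_muscle": 0.8, "eyelid": 0.1},
    rate=2.0,      # performed twice as fast as the default
    extent=0.5,    # performed at half magnitude
    duration=1.5,  # spread over 1.5 seconds
)
```

Scaling each component's extent by the gesture-level modifier is one way the two levels of modifiers could interact; the patent leaves the combination rule open.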
If desired, the gesture element and the audio element can be stored in a memory. The stored gesture element and the audio element can then be retrieved from the memory by the processing device.
In accordance with a further aspect of the present invention, a text file can be created containing the gesture element and the audio element. The processing device can then receive the gesture element and the audio element by reading the gesture element and the audio element from the text file.
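A text file mixing gesture and audio elements could be read as follows. The angle-bracket markup is purely illustrative; the patent describes such a file but does not publish its syntax:

```python
# Hypothetical script format: lines of the form "<gesture NAME>" are
# gesture elements, and all other non-blank lines are audio (speech)
# elements. The syntax is an illustrative assumption.
SCRIPT = """\
<gesture smile>
Hello, welcome to the kiosk.
<gesture wink>
How can I help you?
"""

def read_script(text):
    """Return ('gesture', name) or ('audio', line) tuples in file order."""
    elements = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("<gesture ") and line.endswith(">"):
            elements.append(("gesture", line[len("<gesture "):-1]))
        else:
            elements.append(("audio", line))
    return elements
```

Reading the file top to bottom naturally yields the sequential order in which the processing device receives the elements.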
In accordance with a still further aspect of the present invention, the processing device also receives a configuration element. The configuration element represents a characteristic of the computer generated object such as, for example, a face type, a voice type, or a speech rate. The configuration element is sequentially received by the processing device along with the other elements. The processing device processes the configuration element in turn so that the characteristic of the computer generated object is generated during the presentation of the computer generated object.
In accordance with a still further aspect of the present invention, the gesture element is typically one of a plurality of previously defined gesture elements. The gesture element can then be defined using one or more of the plurality of previously defined gesture elements.
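Defining a new gesture from previously defined ones can be sketched as combining their component extents. Representing gestures as dictionaries of component extents, and taking the maximum where parts overlap, are both illustrative assumptions:

```python
# Sketch of composing a gesture from previously defined gestures; the
# dict-of-extents representation and the max-wins overlap rule are
# hypothetical choices, not specified by the patent.
BASIC_GESTURES = {
    "smile": {"left_cheek": 0.8, "right_cheek": 0.8},
    "raise_eyebrows": {"left_brow": 1.0, "right_brow": 1.0},
}

def compose(name, parts, library):
    """Define a gesture as the union of previously defined gestures;
    where parts move the same component, the largest extent wins."""
    components = {}
    for part in parts:
        for comp, extent in library[part].items():
            components[comp] = max(extent, components.get(comp, 0.0))
    library[name] = components  # the new gesture is itself reusable
    return components

greeting = compose("greeting", ["smile", "raise_eyebrows"], BASIC_GESTURES)
```

Storing the composite back into the library lets later definitions build on it in turn, so gestures can be layered without re-specifying every facial component.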
In accordance with a still further aspect of the present invention, the processing device processes the gesture element such that an additional movement is superimposed upon the movement of at least one of the plurality of movable components associated with the gesture element.
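Superimposing an additional movement on a component's base motion can be sketched as combining two trajectories. The additive combination, the linear base ramp, and the sinusoidal overlay are all assumptions; the patent does not specify how movements are combined:

```python
import math

# Sketch of superimposing a small oscillation on a component's base
# movement; the additive model and parameter values are illustrative.
def base_trajectory(t):
    """Base displacement of a movable component at time t in [0, 1]:
    a simple linear ramp toward the gesture's target position."""
    return t

def superimposed(t, amplitude=0.05, cycles=3):
    """Base movement plus a small sinusoidal oscillation on top."""
    return base_trajectory(t) + amplitude * math.sin(2 * math.pi * cycles * t)
```

Because the overlay is additive, the component still reaches roughly the same end position while exhibiting the extra movement along the way.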
Inventors: Avery, Brian L.; Christian, Andrew D.
Assignee: Compaq Computer Corporation
Attorney: Cesari and McKenna LLP
Examiners: Padmanabhan, Mano; Zimmerman, Mark