Method and system for scripting interactive animated actors
Classification: Computer graphics processing and selective visual display system – Computer graphics processing – Animation
Type: Reexamination Certificate
Status: active
Patent number: 06285380
Filed: 1998-11-04
Issued: 2001-09-04
Examiner: Jankus, Almis R. (Department: 2671)
FIELD OF THE INVENTION
The present invention relates to a method and a system for creating real-time, behavior-based animated actors.
BACKGROUND INFORMATION
Cinema is a medium that can suspend disbelief. The audience enjoys the psychological illusion that fictional characters have an internal life. When this is done properly, the characters can take the audience on a compelling emotional journey. Yet cinema is a linear medium; for any given film, the audience's journey is always the same. Likewise, the experience is inevitably a passive one as the audience's reactions can have no effect on the course of events.
This suspension of disbelief, or believability, does not necessarily require a high degree of realism. For example, millions of people relate to Kermit the Frog and to Bugs Bunny as though they actually exist. Likewise, Bunraku puppet characters can create for their audience a deeply profound and moving psychological experience.
All of these media have one thing in common. Every moment of the audience's journey is being guided by talented experts, whether a screenwriter and actor/director, a writer/animator, or a playwright and team of puppeteers. These experts use their judgment to maintain a balance: characters must be consistent and recognizable, and must respond to each other appropriately at all times. Otherwise believability is lost.
In contrast, many current computer games are non-linear, offering variation and interactivity. While it is possible to create characters for these games that convey a sense of psychological engagement, it is extremely difficult with existing tools.
One limitation is that no expert (actor, director, animator, or puppeteer) is actually present during the unfolding drama, and so authors using existing techniques are limited by what they can anticipate and produce in advance.
There have been several recent efforts to build network-distributed autonomous agents. Work done by Steve Strassman in the area of “Desktop Theater” explored the use of expressive authoring tools for specifying how characters would respond to direction. (S. Strassman, Desktop Theater: Automatic Generation of Expressive Animation, PhD thesis, MIT Media Lab, June 1991.) This work, however, did not deal with real-time visual interaction.
The novel “Snow Crash” posits a “Metaverse,” a future version of the Internet which appears to its participants as a quasi-physical world. (N. Stephenson, Snow Crash, Bantam Doubleday, New York, 1992.) The participants are represented by fully articulated human figures, or avatars, whose body movements are computed automatically by the system. “Snow Crash” touches on the importance of proper authoring tools for avatars, although it does not describe those tools.
The present invention takes these notions further, in that it supports autonomous figures that do not directly represent any participant.
Several autonomous actor simulation systems have been developed which follow the parallel layered intelligence model illustrated in M. Minsky, Society of Mind, MIT Press, 1986. Such a model was partially implemented by the subsumption architecture described by R. Brooks (A Robust Layered Control System for a Mobile Robot, IEEE Journal of Robotics and Automation, 2(1):14-23, 1986) as well as by J. Bates et al. (Integrating Reactivity, Goals and Emotions in a Broad Agent, Proceedings of the 14th Annual Conference of the Cognitive Science Society, Indiana, July 1992) and M. Johnson (WavesWorld: A Testbed for Three-Dimensional Semi-Autonomous Animated Characters, PhD thesis, MIT, 1994). Each of these systems, however, solves problems distinctly different from those addressed by the present invention.
The “Jack” system described in N. Badler et al., Simulating Humans: Computer Graphics, Animation, and Control, Oxford University Press, 1993, focuses on proper task planning and biomechanical simulation. The general goal of that work is to produce accurate simulations of biomechanical robots. The simulations of Terzopoulos et al. (Artificial Fishes: Autonomous Locomotion, Perception, Behavior, and Learning in a Simulated Physical World, Artificial Life, 1(4):327-351, 1994) have autonomous animal behaviors that respond to their environment according to biomechanical rules. Autonomous figure animation has also been studied by N. Badler et al. (Making Them Move: Mechanics, Control, and Animation of Articulated Figures, Morgan Kaufmann Publishers, San Mateo, Calif., 1991), M. Girard et al. (Computational Modeling for the Computer Animation of Legged Figures, Computer Graphics, SIGGRAPH '85 Proceedings, 20(3):263-270, 1985), C. Morawetz et al. (Goal-Directed Human Animation of Multiple Movements, Proc. Graphics Interface, pages 60-67, 1990) and K. Sims (Evolving Virtual Creatures, Computer Graphics, SIGGRAPH '94 Proceedings, 28(3):15-22, 1994).
The “Alive” system of P. Maes et al. (The Alive System: Full Body Interaction with Autonomous Agents, in Computer Animation '95 Conference, Switzerland, April 1995, IEEE Press, pages 11-18) focuses on self-organizing embodied agents which are capable of making inferences and of learning from their experiences. Instead of maximizing the author's ability to express personality, the “Alive” system uses ethological mechanisms to maximize the actor's ability to reorganize its own personality, based on its own perception and accumulated experience.
In general, however, the above efforts do not focus on the author's point of view. To create rich interactive worlds inhabited by believable animated actors, the need exists to provide authors with the proper tools.
SUMMARY OF THE INVENTION
The present invention is directed to the problem of building believable animated characters that respond to users and to each other in real-time, with consistent personalities, properly changing moods and without mechanical repetition, while always maintaining the goals and intentions of the author.
An object of the method and system according to the present invention is to enable authors to construct various aspects of an interactive application. The present invention provides tools which are intuitive to use, allow for the creation of rich, compelling content and produce behavior at run-time which is consistent with the author's vision and intentions. The animated actors are able to respond to a variety of user-interactions in ways that are both appropriate and non-repetitive. The present invention enables multiple actors to work together while faithfully carrying out the author's intentions, allowing the author to control the choices the actors make and how the actors move their bodies. As such, the system of the present invention provides an integrated set of tools for authoring the “minds” and “bodies” of interactive actors.
In accordance with an embodiment of the present invention, animated actors follow scripts, sets of author-defined rules governing their behavior, which are used to determine the appropriate animated actions to perform at any given time. The system of the present invention also includes a behavioral architecture that supports author-directed, multi-actor coordination as well as run-time control of actor behavior for the creation of user-directed actors, or avatars. The system uses a plain-language, or “English-style,” scripting language and a network distribution model to enable creative experts, who are not primarily programmers, to create powerful interactive applications.
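By way of illustration only, the sketch below shows one way such author-defined rules might be represented and evaluated at run time to select an actor's next action. The class names, rule format, and weighted-choice mechanism are assumptions of this example, not the patent's actual scripting language or syntax.

```python
# Hypothetical sketch: a "script" as an ordered list of condition/action rules,
# evaluated each frame to pick the next animated action for an actor.
# These names and structures are illustrative assumptions, not the patent's syntax.

import random

class Rule:
    def __init__(self, condition, actions):
        self.condition = condition  # callable taking the actor's current state
        self.actions = actions      # candidate (action, weight) pairs

    def applies(self, state):
        return self.condition(state)

def choose_action(rules, state):
    """Return a weighted-random action from the first rule whose condition holds."""
    for rule in rules:
        if rule.applies(state):
            actions, weights = zip(*rule.actions)
            return random.choices(actions, weights=weights, k=1)[0]
    return "idle"  # fallback when no authored rule applies

# Example script for a hypothetical "greeter" actor: wave at a nearby user, else wander.
greeter_script = [
    Rule(lambda s: s["user_distance"] < 2.0, [("wave", 3), ("nod", 1)]),
    Rule(lambda s: True,                     [("wander", 1), ("look_around", 1)]),
]

print(choose_action(greeter_script, {"user_distance": 1.2}))   # usually "wave"
print(choose_action(greeter_script, {"user_distance": 10.0}))  # "wander" or "look_around"
```

In this reading, evaluating rules in authored order keeps behavior consistent with the author's intent, while the weighted choice among a rule's candidate actions helps avoid mechanical repetition.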
The present invention provides a method and system for manipulating the geometry of one or more animated characters displayed in real-time in accordance with an actor behavior model. The present invention employs an actor behavior model similar to that proposed by B. Blumberg et al., Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments, Computer Graphics (SIGGRAPH '95 Proceedings), 30(3):47-54, 1995.
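As a rough illustration of this idea, the following sketch assumes a simplified two-level arrangement in which a behavior layer names an action and a motor layer steers joint angles toward that action's pose each frame, thereby manipulating the character's geometry in real time. The data structures, joint names, and numbers are hypothetical.

```python
# A minimal sketch under assumed details: a named action (chosen by a behavior layer)
# is turned into joint-angle targets, and the geometry is eased toward them each frame.

from dataclasses import dataclass, field

@dataclass
class Skeleton:
    # Joint angles in radians; a real system would hold a full articulated hierarchy.
    joints: dict = field(default_factory=lambda: {"shoulder": 0.0, "elbow": 0.0})

ACTION_TARGETS = {  # motor-level pose targets for each named action (illustrative)
    "wave": {"shoulder": 1.2, "elbow": 0.8},
    "rest": {"shoulder": 0.0, "elbow": 0.0},
}

def motor_update(skeleton, action, dt, speed=5.0):
    """Move each joint a fraction of the way toward the action's target each frame,
    giving continuous real-time geometry manipulation rather than instant pose jumps."""
    for joint, target in ACTION_TARGETS[action].items():
        current = skeleton.joints[joint]
        skeleton.joints[joint] = current + (target - current) * min(1.0, speed * dt)

skel = Skeleton()
for frame in range(3):
    motor_update(skel, "wave", dt=1.0 / 30.0)
print(skel.joints)  # joints eased part-way toward the "wave" pose
```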
The system of the present invention comprises two subsystems. The first subsystem is an Animation Engine that uses procedural techniques to enable authors to create layered, continuous, non-repetitive motions and smooth transitions between them. The second subsystem is a Behavior Engine that enables authors to create sophisticated rules governing how actors communicate, change, and make decisions.
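The sketch below illustrates, under assumed details, how a procedural animation layer of this general kind might produce continuous, non-repetitive motion with smooth transitions: a smoothstep blend moves a joint from one pose to another while a small layered variation term keeps the motion from repeating exactly. The noise function and parameters are stand-ins, not the patent's actual procedures.

```python
# A sketch of one way a procedural animation layer might work (assumption, not the
# patent's specification): non-repetitive motion comes from layering low-amplitude
# pseudo-random variation on a base pose, and transitions use an ease-in/ease-out blend.

import math

def noise(t, seed=0.0):
    # Cheap stand-in for a coherent noise function: a sum of incommensurate sines.
    return 0.5 * math.sin(2.1 * t + seed) + 0.3 * math.sin(3.7 * t + 1.3 * seed)

def ease(u):
    # Smoothstep ease-in/ease-out on u clamped to [0, 1].
    u = max(0.0, min(1.0, u))
    return u * u * (3.0 - 2.0 * u)

def blended_angle(t, pose_a, pose_b, blend_start, blend_len, wobble=0.05, seed=0.0):
    """Joint angle at time t: transition from pose_a to pose_b over blend_len seconds,
    plus a continuous low-amplitude layer so the motion never repeats exactly."""
    u = ease((t - blend_start) / blend_len)
    base = (1.0 - u) * pose_a + u * pose_b
    return base + wobble * noise(t, seed)

for t in (0.0, 0.5, 1.0, 1.5):
    print(round(blended_angle(t, pose_a=0.0, pose_b=1.2, blend_start=0.25, blend_len=1.0), 3))
```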
Goldberg Athomas
Perlin Kenneth
Baker Botts L.L.P.
Jankus Almis R.
New York University
Santiago Enrique L