Application of personality models and interaction with synthetic characters in a computing system

Data processing: artificial intelligence – Neural network – Learning task


Details

U.S. Class: 706/15 (C706S015000)
Type: Reexamination Certificate
Status: active
Patent number: 6526395

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to the field of artificial intelligence. More particularly, this invention is directed to the application of personality models and interaction with synthetic characters in a computing system.
BACKGROUND
Computer systems attempting to provide more “human-like” interfaces often employ technologies such as speech recognition and voice control as command input interfaces, and synthesized speech and animated characters as output interfaces. In other words, these computers provide interaction through simulated human speech and/or animated characters.
Potential applications for these input/output interfaces are numerous, and they offer the possibility of allowing people who are not computer proficient to use a computer without learning the specifics of a particular operating system. For example, an application may include a personality within the computer to simulate a personal assistant, thus creating a different interface to databases and/or schedules. In entertainment applications, a system with these capabilities may implement role-playing in games, simulate interaction with historical figures for educational purposes, or simulate interaction with famous rock singers or movie stars.
Current systems focus on understanding speech content and reacting to the words. Although this is a challenging endeavor in itself, once some of these obstacles are overcome, it will also be important to interpret other aspects of the interaction in order to achieve a more natural exchange between humans and computers. Moreover, even if the state of the art in speech recognition improves dramatically, combining multiple interfaces will increase the quality and accuracy of human-computer interaction.
SUMMARY OF THE INVENTION
In one embodiment, an apparatus includes a video input unit and an audio input unit. The apparatus also includes a multisensor fusion/recognition unit coupled to the video input unit and the audio input unit, and a processor coupled to the multisensor fusion/recognition unit. The multisensor fusion/recognition unit decodes a combined video and audio stream containing a set of user inputs.
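
The fusion step the summary describes can be illustrated with a short sketch. Below is a minimal late-fusion example in Python, assuming each modality's recognizer emits per-label confidence scores; the names (Frame, MultisensorFusionUnit) and the weighted score-averaging rule are illustrative assumptions, not the patent's claimed implementation.

    # Hypothetical late-fusion sketch: a video input unit and an audio
    # input unit each produce per-label recognition scores, and the
    # multisensor fusion/recognition unit combines them to decode a
    # user input. Names and the fusion rule are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        timestamp: float          # seconds since stream start
        scores: dict[str, float]  # per-label confidence from one modality

    class MultisensorFusionUnit:
        """Fuses time-aligned audio and video recognition scores."""

        def __init__(self, audio_weight: float = 0.5, video_weight: float = 0.5):
            self.audio_weight = audio_weight
            self.video_weight = video_weight

        def decode(self, audio: Frame, video: Frame) -> str:
            # Weighted late fusion: merge per-label scores from both
            # modalities, then pick the most likely user input.
            labels = set(audio.scores) | set(video.scores)
            fused = {
                label: self.audio_weight * audio.scores.get(label, 0.0)
                       + self.video_weight * video.scores.get(label, 0.0)
                for label in labels
            }
            return max(fused, key=fused.get)

    # Example: audio and video (e.g., lip-reading) recognizers agree.
    fusion = MultisensorFusionUnit()
    audio = Frame(0.0, {"yes": 0.6, "no": 0.4})
    video = Frame(0.0, {"yes": 0.7, "no": 0.3})
    print(fusion.decode(audio, video))  # -> "yes"

In this sketch the combined video and audio stream is reduced to time-aligned score frames, and the decoded user input is simply the label with the highest weighted score; a real system would also handle synchronization and a richer set of inputs.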


REFERENCES:
patent: 5608839 (1997-03-01), Chen
patent: 5873057 (1999-02-01), Eves et al.
patent: 5987415 (1999-11-01), Breese et al.
patent: 6021403 (2000-02-01), Horvitz et al.
patent: 6185534 (2001-02-01), Breese et al.
patent: 6212502 (2001-04-01), Ball et al.
patent: 6230111 (2001-05-01), Mizokawa
patent: 6249780 (2001-06-01), Mizokawa
patent: 6430523 (2002-08-01), Mizokawa
Perlin et al.; “Improv: A System for Scripting Interactive Actors in Virtual Worlds”. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, Apr. 1996, pp. 205-216.*
Alm et al.; “Computer Aided Conversation for Severely Physically Impaired Non-Speaking People”. Conference on Human Factors in Computing Systems, May 1993, pp. 236-241.*
Shah et al.; “An Image/Speech Relational Database and its Application”. IEE Colloquium on Integrated Audio-Visual Processing for Recognition, Synthesis and Communication, Nov. 1996, pp. 6/1-6/5.*
Yamada et al.; “Pattern Recognition of Emotion with Neural Network”. Proceedings of the 1995 IEEE IECON 21st International Conference on Industrial Electronics, Control, and Instrumentation, Nov. 1995, vol. 1, pp. 183-187.*
Kettebekov et al.; “Toward Multimodal Interpretation in a Natural Speech/Gesture Interface”. Proceedings of the 1999 International Conference on Information Intelligence and Systems, Nov. 1999, pp. 328-335.*
Duchnowski et al.; “Toward Movement-Invariant Automatic Lip-Reading and Speech Recognition”. 1995 International Conference on Acoustics, Speech, and Signal Processing, May 1995, vol. 1, pp. 109-112.*
Yoshitaka et al.; “SRAW: A Speech Rehabilitation Assistance Workbench-Speech Ability Evaluation by Audio-Video Input”. IEEE International Conference on Multimedia Computing and Systems, vol. 2, pp. 772-777.*
Nakamura et al.; “MMD: Multimodal Multi-View Integrated Database for Human Behavior Understanding”. Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition, Apr. 1998, pp. 540-545.*
De Silva et al.; “Facial Emotion Recognition Using Multi-Modal Information”. Proceedings of the 1997 International Conference on Information, Communication, and Signal Processing, Sep. 1997, vol. 1, pp. 397-401.*
Brooke et al.; “Making Talking Heads and Speech-Reading with Computers”. IEE Colloquium on Integrated Audio-Visual Processing for Recognition, Synthesis and Communication, Nov. 1996, pp. 2/1-2/6.*
Karaorman et al.; “An Experimental Japanese/English Interpreting Video Phone System”. Proceedings of the 4th International Conference on Spoken Language, Oct. 1996, vol. 3, pp. 1676-1679.*
Boothe, H. H.; “Audio to Audio-Video Speech Conversion with the Help of Phonetic Knowledge Integration”. 1997 IEEE International Conference on Systems, Man, and Cybernetics, Oct. 1997, vol. 2, pp. 1632-1637.*
Chen et al.; “Multimodal Human Emotion/Expression Recognition”. 3rd IEEE International Conference on Automatic Face and Gesture Recognition, Apr. 1998, pp. 366-371.*
Thalmann et al.; “Face to Virtual Face”. Proceedings of the IEEE, May 1998, vol. 86, No. 5, pp. 870-883.*
Goto et al.; “Multimodal Interaction in Collaborative Virtual Environments”. Proceedings of the 1999 International Conference on Image Processing, Oct. 1999, vol. 3, pp. 1-5.*
Pavlović, V.; “Multimodal Tracking and Classification of Audio-Visual Features”. 1998 International Conference on Image Processing, Oct. 1998, vol. 1, pp. 343-347.*
Ostermann, J.; “Animation of Synthetic Faces in MPEG-4”. Proceedings of Computer Animation 1998, Jun. 1998, pp. 49-55.*
Pavlović et al.; “Integration of Audio/Visual Information for Use in Human-Computer Intelligent Interaction”. International Conference on Image Processing, Oct. 1997, vol. 1, pp. 121-124.*
Brooke, N.M.; “Using the Visual Components in Automatic Speech Recognition”. Proceedings of the 4th International Conference on Spoken Language, Oct. 1996, vol. 3, pp. 1656-1659.*
Miyake et al.; “A Gesture Controlled Human Interface Using an Artificial Retina Chip”. IEEE Lasers and Electro-Optics Society Annual Meeting, Nov. 1996, vol. 1, pp. 292-293.*
Lavagetto et al.; “Synthetic and Hybrid Imaging in the Humanoid and Vidas Projects”. International Conference on Image Processing, Sep. 1996, vol. 3, pp. 663-666.*
Guiard-Marigny et al.; “3D Models of the Lips for Realistic Speech Animation”. Proceedings of Computer Animation, Jun. 1996, pp. 80-89.
