Data processing: artificial intelligence – Adaptive system
Reexamination Certificate
1998-03-13
2001-04-17
Davis, George B. (Department: 2122)
Data processing: artificial intelligence
Adaptive system
C706S018000, C706S020000, C706S026000
Reexamination Certificate
active
06219657
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to devices and methods for the creation of emotions in electronic apparatuses containing interfaces such as artificial life and artificial agents. This application is based on patent application No. Hei 9-78918 filed in Japan, the content of which is incorporated herein by reference.
2. Prior Art
Recently, electronic apparatuses such as home electronic apparatuses and office automation apparatuses have been designed with multiple functions and complicated configurations. For such multi-function apparatuses, recent technology realizes a human interface which is capable of increasing the efficiency with which the apparatus is handled; for example, recent technology provides bar code input systems and voice input/output systems. Conventionally, complicated manual operations are required to input instructions to the apparatus. Those manual operations have been replaced by simple button operations, and combinations of the simple button operations have in turn been replaced by “collective” bar code inputs. Further, an advanced apparatus is capable of accepting voice instructions in the natural language with which the user is familiar. Progress has also been made in the responses from the apparatuses. Previously, an apparatus merely executed the instructions; nowadays, an apparatus is capable of sending a response indicating acceptance of the instruction(s). In the case of a reservation of videotape recording on a videotape recorder, for example, when the user completes the reservation, the videotape recorder automatically indicates a recording reservation mark in a certain section relating to the timer display of its video display screen. At completion of the reservation, a television set connected to the videotape recorder visually displays, on its screen, a string of symbols (or characters) or natural language declaring acceptance of the reservation. In addition, the natural language is vocalized so that a speaker of the television set produces human voices representing a short sentence as follows:
“Reservation is completed (or accepted)”.
The technology is thus being developed to gradually actualize a simplified interface whose operation is simplified as described above. Engineers now tend to pay attention to methods that simulate the operation of the interface as if a personified agent were performing it. Such personification increases the user's expectations of the interface. However, excessively raised expectations may cause the user to be dissatisfied with the present level of the interface, which the user may not find entirely satisfactory. To eliminate such dissatisfaction with the interface, Japanese Patent Laid-Open Publication No. 6-12401 provides a technology which attempts to give the personified agent (simulated) emotions.
In the conventional personified agent described above, the emotions are realized by changing one parameter with respect to a single situation, or by changing multiple parameters independently with respect to a single situation. For this reason, if the effects given from the external world are unchanged, the amount of variation of the emotions is determined directly (or univocally), regardless of the present emotional situation. Thus, as compared with the “actual” biological variations of emotions, the personified agent is subjected to “unnatural” variations of emotions.
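To make this limitation concrete, the following minimal Python sketch (not taken from the patent; the function names and the coefficient are illustrative assumptions) contrasts a state-independent update, in which an unchanged external effect always yields the same emotional change, with a state-dependent update in which the present emotional level moderates the change:

    # Illustrative contrast only; the coefficient 0.5 is an arbitrary assumption.
    def update_state_independent(stimulus: float) -> float:
        # Prior-art style: the change depends only on the external effect,
        # so the same stimulus always produces the same variation.
        return 0.5 * stimulus

    def update_state_dependent(stimulus: float, current_level: float) -> float:
        # The same stimulus produces a smaller variation when the emotion is
        # already near its maximum, loosely mimicking biological saturation.
        return 0.5 * stimulus * (1.0 - current_level)

    # The same external effect shifts a calm state more than an excited one.
    for level in (0.1, 0.9):
        print(update_state_independent(1.0), update_state_dependent(1.0, level))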
In addition, the conventional personified agent is designed to accept only pre-defined situations given from the external world, so it does not change its emotions in response to non-defined situations. For this reason, the conventional personified agent lacks diversity in its variations of emotions.
Another method controls the personified agent using random numbers for the variations of the emotions. However, such a method may produce emotions whose variations are unnatural (or strange).
SUMMARY OF THE INVENTION
It is an object of the invention to provide a device and a method for the creation of emotions which are capable of creating emotions whose variations are natural and biological.
A device and a method for the creation of emotions according to this invention are provided for an information interface, such as an artificial agent or a personified agent, interposed between a human being (i.e., the user) and an electronic apparatus.
According to one aspect of the invention, an emotion creating device is configured from a neural network, a behavior determination engine and a feature determination engine. The neural network inputs user information, representing conditions of the user, and apparatus information, representing conditions of the apparatus, so as to produce emotional states. Herein, a present set of emotional states is produced in consideration of a previous set of emotional states. The emotional states represent prescribed emotions such as pleasure, anger, sadness and surprise. The behavior determination engine refers to a behavior determination database, using the user information and the emotional states of the neural network, so as to determine a behavior of the interface. The feature determination engine refers to a feature determination database, using the emotional states of the neural network, to determine a feature of the interface, which corresponds to a facial feature.
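As an informal illustration of this configuration, the following Python code is a minimal sketch only: the class name, network weights, sigmoid activation and database entries are assumptions for illustration rather than the patent's actual design, and the behavior lookup is simplified to the dominant emotion. It shows a recurrent network whose present emotional states depend on the previous ones, followed by table lookups for a behavior and a facial feature:

    # A minimal, hypothetical sketch of the described architecture.
    import numpy as np

    EMOTIONS = ("pleasure", "anger", "sadness", "surprise")

    class EmotionCreatingDevice:
        def __init__(self, n_inputs: int, seed: int = 0):
            rng = np.random.default_rng(seed)
            # The network sees user/apparatus information plus the previous
            # emotional states, so a present set of states is produced in
            # consideration of the previous set.
            self.weights = rng.normal(
                scale=0.1, size=(len(EMOTIONS), n_inputs + len(EMOTIONS)))
            self.states = np.zeros(len(EMOTIONS))
            # Toy stand-ins for the behavior / feature determination databases.
            self.behavior_db = {"pleasure": "greet cheerfully",
                                "anger": "respond curtly",
                                "sadness": "respond slowly",
                                "surprise": "ask for confirmation"}
            self.feature_db = {"pleasure": "smiling face",
                               "anger": "knitted brows",
                               "sadness": "downcast eyes",
                               "surprise": "raised eyebrows"}

        def step(self, user_info, apparatus_info):
            # Produce the new emotional states from the inputs and the previous
            # states, then look up a behavior and a facial feature.
            x = np.concatenate([user_info, apparatus_info, self.states])
            self.states = 1.0 / (1.0 + np.exp(-self.weights @ x))  # sigmoid
            dominant = EMOTIONS[int(np.argmax(self.states))]
            return self.behavior_db[dominant], self.feature_db[dominant]

    # Example: two user-information values and one apparatus-information value.
    device = EmotionCreatingDevice(n_inputs=3)
    behavior, feature = device.step(np.array([0.8, 0.2]), np.array([0.5]))
    print(behavior, feature)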
According to another aspect of the invention, an emotion creating method is actualized using programs which are run by a computer to realize the functions of the emotion creating device. Herein, the programs and data are stored in recording media.
REFERENCES:
patent: 5497430 (1996-03-01), Sadovnik et al.
patent: 5724484 (1998-03-01), Kagami et al.
patent: 5774591 (1998-06-01), Black et al.
patent: 7-72900 (1995-03-01), None
patent: 7-104778 (1995-04-01), None
patent: 8-339446 (1996-12-01), None
patent: 10-49188 (1998-02-01), None
Avent et al, “Machine Vision Recognition of Facial Affect Using Backpropagation Neural Networks”, IEEE Proceedings of the 16th Annual International Conference on New Opportunities for Biomedical Engineers, Engineering in Medicine and Biology Society, Nov. 1994.*
Yamada et al, “Pattern Recognition of Emotion with Neural Network”, IEEE International Conference on Industrial Electronics, Control and Instrumentation, Nov. 1995.*
Sato et al, “Emotion Modeling in Speech Production Using Emotion Space”, IEEE 5th International Workshop on Robot and Human Communication, Nov. 1996.*
Pramadihanto et al, “Face Recognition from a Single View Based on Flexible Neural Network Matching”, IEEE 5th International Workshop on Robot and Human Communication, Nov. 1996.*
Ding et al, “Neural Network Structures for Expression Recognition”, Proceeding of IEEE 1993 International Conference on Neural Networks, 1993.*
Takacs et al, “Locating Facial Features Using SOFM”, IEEE Proceedings of the 12th IAPR International Conference on Pattern Recognition, Oct. 1994.*
Vincent et al, “Precise Location of Facial Features by a Hierarchical Assembly of Neural Nets”, IEEE 2nd International Conference of Artificial Neural Networks, 1991.*
Morishima et al, “Emotion Space for Analysis and Synthesis of Facial Expression”, IEEE International Workshop on Robot and Human Communication, 1993.*
Kawakami et al, “Construction of 3-D Emotion Space Based on Parameterized Faces”, IEEE International Workshop on Robot and Human Communication, 1994.*
Kawakami et al, “An Evaluation of 3-D Emotion Space”, IEEE International Workshop on Robot and Human Communication, 1995.*
Morishima et al, “A Facial Image Synthesis System for Human-Machine Interface”, IEEE International Workshop on Robot and Human Communication, 1992.*
Morishima et al, “Image Synthesis and Editing System for a Multi-Media Human Interface with Speaking Head”, IEEE Inter. Conf. on Image Processing and Its Applications, 1992.*
Morishima et al, “A Media Conversion from Speech to Facial Image for Intelligent Man-Machine Interface”, IEEE Journal on Selected Areas in
Davis George B.
Foley & Lardner
NEC Corporation