Method and apparatus for controlling the operation of an...

Data processing: speech signal processing, linguistics, language – Speech signal processing – Synthesis

Reexamination Certificate


Details

Classifications: C704S261000, C704S268000, C704S270000, C700S001000
Status: active
Patent number: 07457752

ABSTRACT:
Method and apparatus for controlling the operation of an emotion synthesizing device, notably of the type in which the emotion is conveyed by a sound and at least one input parameter sets the type of emotion to be conveyed. At least one parameter is made variable over a determined control range, thereby conferring variability in the amount of the chosen type of emotion. The variable parameter can be made to vary according to a variation model over the control range, the model relating a quantity-of-emotion control variable to the variable parameter; the control variable is then used to variably establish the value of the variable parameter. Preferably the variation obeys a linear model, the variable parameter varying linearly with the quantity-of-emotion control variable.
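The linear model in the abstract can be sketched as simple interpolation: a quantity-of-emotion control variable q, clamped to its control range, linearly blends each variable synthesis parameter between a neutral value and a full-emotion value. This is a minimal illustrative sketch, not the patented implementation; the parameter names and numeric values below are assumptions chosen for the example.

```python
# Hypothetical sketch of a linear quantity-of-emotion control model:
# q in [0, 1] sets each variable parameter by linear interpolation
# between its neutral value and its full-emotion value.

def set_emotion_amount(q, neutral_params, emotion_params):
    """Linearly blend synthesizer parameters by emotion quantity q."""
    q = max(0.0, min(1.0, q))  # clamp q to the determined control range
    return {
        name: neutral_params[name]
        + q * (emotion_params[name] - neutral_params[name])
        for name in neutral_params
    }

# Illustrative values: a "happy" setting raises pitch and speech rate.
neutral = {"pitch_hz": 120.0, "rate_wpm": 150.0}
happy = {"pitch_hz": 180.0, "rate_wpm": 190.0}

half_happy = set_emotion_amount(0.5, neutral, happy)
# pitch_hz -> 150.0, rate_wpm -> 170.0
```

Because the mapping is linear, halving q halves the departure from the neutral voice, which is the "variability in the amount of the type of emotion" the abstract describes.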

REFERENCES:
patent: 5367454 (1994-11-01), Kawamoto et al.
patent: 5559927 (1996-09-01), Clynes
patent: 5732232 (1998-03-01), Brush, II et al.
patent: 5765134 (1998-06-01), Kehoe
patent: 5860064 (1999-01-01), Henton
patent: 6160986 (2000-12-01), Gabai et al.
patent: 6175772 (2001-01-01), Kamiya et al.
patent: 6185534 (2001-02-01), Breese et al.
patent: 6804649 (2004-10-01), Miranda
patent: 6947893 (2005-09-01), Iwaki et al.
patent: 6959166 (2005-10-01), Gabai et al.
patent: 6980956 (2005-12-01), Takagi et al.
patent: 2002/0019678 (2002-02-01), Mizokawa
patent: 2002/0026315 (2002-02-01), Miranda
patent: 2004/0019484 (2004-01-01), Kobayashi et al.
Jun Sato, Shigeo Morishima, “Emotion Modeling in Speech Production Using Emotion Space”, IEEE, 1996.
Fumio Kawakami, Shigeo Morishima, Hiroshi Yamada, Hiroshi Harashima, “Construction of 3-D Emotion Space Based on Parameterized Faces”, IEEE, 1994.
Fumio Kawakami, Motohiro Okura, Hiroshi Yamada, Hiroshi Harashima, Shigeo Morishima, “An Evaluation of 3-D Emotion Space”, IEEE, 1995.
Shigeo Morishima, Hiroshi Harashima, “Emotion Space for Analysis and Synthesis of Facial Expression”, IEEE, 1993.
Janet E. Cahn, “The Generation of Affect in Synthesized Speech”, Journal of the American Voice I/O Society, 1990.
Jun Sato, Shigeo Morishima, "Emotion Modeling in Speech Production Using Emotion Space", 5th IEEE International Workshop on Robot and Human Communication, Tsukuba, Japan, Nov. 11-14, 1996, pp. 472-477, XP010212883, ISBN 0-7803-3253-9.
Patent Abstracts of Japan, vol. 016, No. 532 (P-1448), Oct. 30, 1992 & JP 4-199098 A (Meidensha Corp), Jul. 20, 1992.
Yoshinori Kitahara et al., "Prosodic Control to Express Emotions for Man-Machine Speech Interaction", IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Institute of Electronics, Information and Communication Engineers, Tokyo, JP, vol. E75-A, No. 2, Feb. 1, 1992, pp. 155-163, XP000301808, ISSN: 0916-8508.
Ignasi Iriondo et al., "Validation of an Acoustical Modelling of Emotional Expression in Spanish Using Speech Synthesis Techniques", ISCA Workshop on Speech and Emotion, Sep. 2000, pp. 1-6, XP007005765.
