Visual display methods for use in computer-animated speech...

Data processing: speech signal processing, linguistics, language – Speech signal processing – Synthesis

Details

Subclasses: C704S270000, C345S473000

Type: Reexamination Certificate

Status: active

Application number: 09960248

ABSTRACT:
A method of modeling speech distinctions within computer-animated talking heads through the manipulation of speech production articulators for selected speech segments. Graphical representations of voice characteristics and speech production characteristics are generated in response to the speech segment. By way of example, breath images such as particle-cloud images and particle-stream images are generated to represent speech production characteristics such as the presence of stops and fricatives, respectively. Coloring on exterior portions of the talking head is displayed in response to selected voice characteristics such as nasality. The external physiology of the talking head is modulated, such as by changing the width and movement of the nose, the position of the eyebrows, and the movement of the throat, in response to voice characteristics such as pitch, nasality, and voicebox vibration, respectively. The invention also describes modeling the talking head on the facial image of a particular individual.
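To make the mapping the abstract describes more concrete, the sketch below pairs classes of speech segments with visual cues on the talking head: a particle-cloud breath image for stops, a particle-stream image for fricatives, a tint on the nose region for nasality, and throat modulation for voicing. This is a minimal, hypothetical sketch; the names (VisualCue, cue_for_segment, the phone sets, the cue labels) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of segment-to-visual-cue mapping; all names are
# illustrative, not from the patent.
from dataclasses import dataclass
from typing import Optional

# Speech segments grouped by manner of articulation (simplified inventory).
STOPS = {"p", "b", "t", "d", "k", "g"}
FRICATIVES = {"f", "v", "s", "z", "sh", "zh", "th", "dh", "h"}
NASALS = {"m", "n", "ng"}

@dataclass
class VisualCue:
    breath_image: Optional[str]  # "particle_cloud", "particle_stream", or None
    face_tint: Optional[str]     # e.g., tint the nose region for nasality
    throat_vibration: bool       # modulate throat geometry when voiced

def cue_for_segment(phone: str, voiced: bool) -> VisualCue:
    """Select the visual display for one speech segment."""
    if phone in STOPS:
        breath = "particle_cloud"    # brief burst of particles at the stop release
    elif phone in FRICATIVES:
        breath = "particle_stream"   # continuous stream during frication noise
    else:
        breath = None
    tint = "nasal_highlight" if phone in NASALS else None
    return VisualCue(breath_image=breath, face_tint=tint, throat_vibration=voiced)

if __name__ == "__main__":
    # Voiceless stop, voiced fricative, voiced nasal.
    for phone, voiced in [("p", False), ("z", True), ("m", True)]:
        print(phone, cue_for_segment(phone, voiced))
```

In an actual renderer, each VisualCue would drive a particle system and shading pass per animation frame; the table-driven selection shown here keeps the segment classification separate from the rendering.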

REFERENCES:
patent: 4177589 (1979-12-01), Villa
patent: 4884972 (1989-12-01), Gasper
patent: 5613056 (1997-03-01), Gasper et al.
patent: 5630017 (1997-05-01), Gasper et al.
patent: 5689618 (1997-11-01), Gasper et al.
patent: 5826234 (1998-10-01), Lyberg
patent: 5880788 (1999-03-01), Bregler
patent: 5907351 (1999-05-01), Chen et al.
patent: 5933151 (1999-08-01), Jayant et al.
patent: 5963217 (1999-10-01), Grayson et al.
patent: 5969721 (1999-10-01), Chen et al.
patent: 5982389 (1999-11-01), Guenter et al.
patent: 5982390 (1999-11-01), Stoneking et al.
patent: 5990878 (1999-11-01), Ikeda et al.
patent: 5995119 (1999-11-01), Cosatto et al.
patent: 6067095 (2000-05-01), Danieli
patent: 6108012 (2000-08-01), Naruki et al.
patent: 6147692 (2000-11-01), Shaw et al.
Cohen, Michael M. and Massaro, Dominic W.; “Modeling Coarticulation in Synthetic Visual Speech,” Models and Techniques in Computer Animation, N.M. Thalmann and D. Thalmann (eds.), pp. 139-156, Springer-Verlag, Tokyo, 1993.
Garland, Michael and Heckbert, Paul S.; “Surface Simplification Using Quadric Error Metrics,” Carnegie Mellon University, pp. 1-8, http://www.cs.cmu.edu, SIGGRAPH, 1997.
Westbury, John R.; “X-Ray Microbeam Speech Production Database User's Handbook,” Waisman Center on Mental Retardation & Human Development, cover page, table of contents, and foreword, pp. 1-131, Jun. 1994.
Cole, Ron et al.; “Intelligent Animated Agents for Interactive Language Training,” Proceedings of Speech Technology in Language Learning, pp. 1-4, Stockholm, Sweden, 1998.
Stone, Maureen and Lundberg, Andrew; “Three-Dimensional Tongue Surface Shapes of English Consonants and Vowels,” Journal of the Acoustical Society of America, pp. 3728-3737, vol. 99, No. 6, Jun. 1996.
Munhall, K.G. et al.; “X-Ray Film Database for Speech Research,” Journal of the Acoustical Society of America, pp. 1222-1224, vol. 98, No. 2, Part 1, Aug. 1995.
Le Goff, Bertrand; “Synthèse à partir du texte de visage 3D parlant français” [Synthesis from Text of a French-Speaking 3D Talking Face], Doctoral Thesis, pp. 1-253, Oct. 22, 1997.
