Wavelet-based facial motion capture for avatar animation

Image analysis – Applications – Target tracking or detecting

Reexamination Certificate


Details

Classification: C382S118000, C382S209000, C382S276000
Type: Reexamination Certificate
Status: active
Patent number: 06272231

ABSTRACT:

FIELD OF THE INVENTION
The present invention relates to dynamic facial feature sensing, and more particularly, to a vision-based motion capture system that allows real-time finding, tracking and classification of facial features for input into a graphics engine that animates an avatar.
BACKGROUND OF THE INVENTION
Virtual spaces filled with avatars are an attractive way to allow for the experience of a shared environment. However, existing shared environments generally lack facial feature sensing of sufficient quality to allow for the incarnation of a user, i.e., the endowment of an avatar with the likeness, expressions or gestures of the user. Quality facial feature sensing is a significant advantage because facial gestures are a fundamental means of communication. Thus, the incarnation of a user augments the attractiveness of virtual spaces.
Existing methods of facial feature sensing typically use markers that are glued to a person's face. The use of markers for facial motion capture is cumbersome and has generally restricted the use of facial motion capture to high-cost applications such as movie production. Accordingly, there exists a significant need for a vision-based motion capture system that implements convenient and efficient facial feature sensing. The present invention satisfies this need.
SUMMARY OF THE INVENTION
The present invention is embodied in an apparatus, and related method, for sensing a person's facial movements, features, or characteristics. The results of the facial sensing may be used to animate an avatar image. The avatar apparatus uses an image processing technique based on model graphs and bunch graphs that efficiently represent image features as jets composed of wavelet transforms evaluated at landmarks on a facial image corresponding to readily identifiable features. The sensing system allows tracking of a person's natural characteristics without any unnatural elements, such as markers, interfering with those characteristics.
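As an illustrative sketch only (not the patent's actual implementation), a jet at a landmark can be formed from the responses of a family of complex Gabor wavelets, and two jets can be compared by the normalized dot product of their magnitudes. The kernel sizes, scale counts, and orientation counts below are assumed values:

```python
import numpy as np

def gabor_kernel(size, k, theta, sigma=2.0 * np.pi):
    """One complex Gabor wavelet: a plane wave restricted by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    gauss = (k**2 / sigma**2) * np.exp(-(k**2) * (x**2 + y**2) / (2 * sigma**2))
    # Subtracting exp(-sigma^2/2) makes the kernel DC-free.
    wave = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma**2 / 2)
    return gauss * wave

def jet(image, x, y, n_scales=5, n_orients=8, size=33):
    """Jet: complex responses of a family of Gabor wavelets at one landmark."""
    half = size // 2
    patch = image[y - half:y + half + 1, x - half:x + half + 1]
    responses = []
    for s in range(n_scales):
        k = (np.pi / 2) * (2 ** (-s / 2))  # geometric spacing of scales
        for o in range(n_orients):
            kern = gabor_kernel(size, k, o * np.pi / n_orients)
            responses.append(np.sum(patch * kern))
    return np.array(responses)

def jet_similarity(j1, j2):
    """Magnitude-based similarity: normalized dot product of jet amplitudes."""
    a1, a2 = np.abs(j1), np.abs(j2)
    return float(a1 @ a2 / np.sqrt((a1 @ a1) * (a2 @ a2)))
```

Because the similarity uses only nonnegative magnitudes, it always lies in [0, 1], and a jet compared against itself scores 1.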
The feature sensing process operates on a sequence of image frames, transforming each frame with a wavelet transformation to generate a transformed image frame. Node locations associated with the wavelet jets of a model graph are initialized on the transformed image frame by moving the model graph across the frame and placing it at the location of maximum jet similarity between the model's wavelet jets and the transformed frame. The locations of one or more nodes of the model graph are then tracked between image frames. A tracked node is reinitialized if its position deviates beyond a predetermined position constraint between image frames.
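A minimal sketch of this placement scan and tracking constraint, assuming the transformed frame is stored as one jet (feature vector) per pixel; `frame_jets`, `node_offsets`, and `max_disp` are illustrative names and thresholds, not taken from the patent:

```python
import numpy as np

def sim(a, b):
    """Normalized dot product of two jets (feature vectors)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def place_graph(frame_jets, node_offsets, model_jets, step=4):
    """Slide the rigid model graph over the wavelet-transformed frame and
    return the top-left position with maximum mean jet similarity."""
    H, W, _ = frame_jets.shape
    span_y = max(dy for dy, dx in node_offsets)
    span_x = max(dx for dy, dx in node_offsets)
    best, best_pos = -np.inf, (0, 0)
    for y in range(0, H - span_y, step):
        for x in range(0, W - span_x, step):
            s = np.mean([sim(frame_jets[y + dy, x + dx], mj)
                         for (dy, dx), mj in zip(node_offsets, model_jets)])
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos, best

def constrain_tracked_node(prev_pos, new_pos, expected_pos, max_disp=10.0):
    """Reinitialize a tracked node that drifts beyond the position constraint:
    if the frame-to-frame displacement exceeds max_disp, snap the node back
    to its expected (graph-consistent) position."""
    disp = np.hypot(new_pos[0] - prev_pos[0], new_pos[1] - prev_pos[1])
    return expected_pos if disp > max_disp else new_pos
```

A real system would scan coarsely first and refine near the best hit; the exhaustive loop here is kept simple for clarity.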
In one embodiment of the invention, the facial feature finding may be based on elastic bunch graph matching for individualizing a head model. Also, the model graph for facial image analysis may include a plurality of location nodes (e.g., 18) associated with distinguishing features on a human face.
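The bunch-graph idea can be sketched in a few lines: each node stores a "bunch" of example jets drawn from different faces, and the node's match score is the best similarity over that bunch, which lets one general graph adapt to an individual face. This is a conceptual sketch, not the patented matching procedure:

```python
import numpy as np

def bunch_similarity(frame_jet, bunch):
    """Best normalized-dot-product match between a jet extracted from the
    frame and a node's bunch of stored example jets (rows of `bunch`)."""
    norms = np.linalg.norm(bunch, axis=1) * np.linalg.norm(frame_jet) + 1e-12
    return float(np.max(bunch @ frame_jet / norms))
```

The node score is high whenever any stored exemplar resembles the observed feature, so the bunch covers variation (beards, glasses, expressions) that no single model jet could.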
Other features and advantages of the present invention should be apparent from the following description of the preferred embodiments, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.


REFERENCES:
patent: 4725824 (1988-02-01), Yoshioka
patent: 4805224 (1989-02-01), Koezuka et al.
patent: 4827413 (1989-05-01), Baldwin et al.
patent: 5159647 (1992-10-01), Burt
patent: 5168529 (1992-12-01), Peregrim et al.
patent: 5187574 (1993-02-01), Kosemura et al.
patent: 5220441 (1993-06-01), Gerstenberger
patent: 5280530 (1994-01-01), Trew et al.
patent: 5333165 (1994-07-01), Sun
patent: 5383013 (1995-01-01), Cox
patent: 5430809 (1995-07-01), Tomitaka
patent: 5432712 (1995-07-01), Chan
patent: 5511153 (1996-04-01), Azarbayejani et al.
patent: 5533177 (1996-07-01), Wirtz et al.
patent: 5550928 (1996-08-01), Lu et al.
patent: 5581625 (1996-12-01), Connell
patent: 5588033 (1996-12-01), Yeung
patent: 5680487 (1997-10-01), Markandey
patent: 5699449 (1997-12-01), Javidi
patent: 5714997 (1998-02-01), Anderson
patent: 5715325 (1998-02-01), Bang et al.
patent: 5719954 (1998-02-01), Onda
patent: 5736982 (1998-04-01), Suzuki et al.
patent: 5764803 (1998-06-01), Jacquin et al.
patent: 5774591 (1998-06-01), Black et al.
patent: 5802220 (1998-09-01), Black et al.
patent: 5809171 (1998-09-01), Neff et al.
patent: 5828769 (1998-10-01), Burns
patent: 5917937 (1999-06-01), Szeliski et al.
patent: 5982853 (1999-11-01), Libermann
patent: 5995119 (1999-11-01), Cosatto et al.
patent: 6044168 (2000-03-01), Tuceryan et al.
patent: 6052123 (1999-11-01), Lection et al.
patent: 44 06 020 C1 (1995-06-01), None
patent: 0807902 (1997-11-01), None
Wiskott, L., et al, "Face Recognition by Elastic Bunch Graph Matching", 1997.
Sara, R., et al, "3-D Data Acquisition and Interpretation for Virtual Reality and Telepresence", Proceedings IEEE Workshop Computer Vision for Virtual Reality Based Human Communication, Bombay, Jan. 1998, 7 pp.
Sara, R., et al, "Fish-Scales: Representing Fuzzy Manifolds", Proceedings International Conference on Computer Vision, ICCV '98, pp. 811-817, Bombay, Jan. 1998.
Wiskott, L., "Phantom Faces for Face Analysis", Pattern Recognition, vol. 30, no. 6, pp. 837-846, 1997.
Wurtz, R., "Object Recognition Robust Under Translations, Deformations, and Changes in Background", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, Jul. 1997, pp. 769-775.
Akimoto, T., et al, "Automatic Creation of 3-D Facial Models", IEEE Computer Graphics & Applications, pp. 16-22, Sep. 1993.
Ayache, N., et al, "Rectification of Images for Binocular and Trinocular Stereovision", Proceedings of the 9th IEEE International Conference on Pattern Recognition, pp. 11-16, 1988, Italy.
Belhumeur, P., "A Bayesian Approach to Binocular Stereopsis", International Journal of Computer Vision, vol. 19, no. 3, 1996, pp. 237-260.
Beymer, D. J., "Face Recognition Under Varying Pose", Massachusetts Institute of Technology, Artificial Intelligence Laboratory, A.I. Memo No. 1461, Dec. 1993, pp. 1-13.
Beymer, D. J., "Face Recognition Under Varying Pose", Massachusetts Institute of Technology, Artificial Intelligence Laboratory research report, 1994, pp. 756-761.
Buhmann, J., et al, "Distortion Invariant Object Recognition by Matching Hierarchically Labeled Graphs", Proceedings IJCNN International Conference on Neural Networks, Washington, DC, Jun. 1989, pp. 155-159.
DeCarlo, D., et al, "The Integration of Optical Flow and Deformable Models with Applications to Human Face Shape and Motion Estimation", Proceedings CVPR '96, pp. 231-238.
Devernay, F., et al, "Computing Differential Properties of 3-D Shapes from Stereoscopic Images without 3-D Models", INRIA, RR-2304, 1994, pp. 1-28.
Dhond, U., et al, "Structure from Stereo - A Review", IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 6, pp. 1489-1510, Nov./Dec. 1989.
Fleet, D. J., et al, "Computation of Component Image Velocity from Local Phase Information", International Journal of Computer Vision, vol. 5, no. 1, 1990, pp. 77-104.
Fleet, D. J., et al, "Measurement of Image Velocity", Kluwer International Series in Engineering and Computer Science, no. 169, Kluwer Academic Publishers, Boston, 1992, pp. 1-203.
Hall, E. L., "Computer Image Processing and Recognition", Academic Press, 1979, pp. 468-484.
Hong, H., et al, "Online Facial Recognition Based on Personalized Gallery", Proceedings of International Conference on Automatic Face and Gesture Recognition, pp. 1-6, Japan, Apr. 1997.
Kolocsai, P., et al, "Statistical Analysis of Gabor-Filter Representation", Proceedings of International Conference on Automatic Face and Gesture Recognition, 1997, 4 pp.
Kruger, N., "Visual Learning with a priori Constraints", Shaker Verlag, Aachen, Germany, 1998, pp. 1-131.
Kruger, N., et al, "Principles of Cortical Processing Applied to and Motivated by Artificial Object Recognition", Institut fur Neuroinformatik, Internal Report 97-17, Oct. 1997, pp. 1-12.
Kruger, N., et al, "Autonomous Learning of Object Representation Utilizing Self-Con
