Apparatus and method for tracking facial motion through a sequence of images

Image analysis – Image transformation or preprocessing


Details

Classification: 382/100, 382/293, G06F 9/36

Type: Patent

Status: active

Patent number: 058022201

ABSTRACT:
A system tracks human head and facial features over time by analyzing a sequence of images. The system provides descriptions of motion of both head and facial features between two image frames. These descriptions of motion are further analyzed by the system to recognize facial movement and expression. The system analyzes motion between two images using parameterized models of image motion. Initially, a first image in a sequence of images is segmented into a face region and a plurality of facial feature regions. A planar model is used to recover motion parameters that estimate motion between the segmented face region in the first image and a second image in the sequence of images. The second image is warped or shifted back towards the first image using the estimated motion parameters of the planar model, in order to model the facial features relative to the first image. An affine model and an affine model with curvature are used to recover motion parameters that estimate the image motion between the segmented facial feature regions and the warped second image. The recovered motion parameters of the facial feature regions represent the relative motions of the facial features between the first image and the warped image. The face region in the second image is tracked using the recovered motion parameters of the face region. The facial feature regions in the second image are tracked using both the recovered motion parameters for the face region and the motion parameters for the facial feature regions. The parameters describing the motion of the face and facial features are filtered to derive mid-level predicates that define facial gestures occurring between the two images. These mid-level predicates are evaluated over time to determine facial expression and gestures occurring in the image sequence.

REFERENCES:
patent: 4975960 (1990-12-01), Petajan
patent: 5067014 (1991-11-01), Bergen et al.
patent: 5259040 (1993-11-01), Hanna
patent: 5557684 (1996-09-01), Wang et al.
patent: 5581276 (1996-12-01), Cipolla et al.
Bergen, James R. and Peter J. Burt. "A Three-Frame Algorithm for Estimating Two-Component Image Motion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, No. 9, pp. 886-896, Sep. 1992.
Black, M.J., Yacoob, Y., "Recognizing Facial Expressions under Rigid and Non-Rigid Facial Motions," published at International Workshop on Face & Gesture Recognition, Zurich, Jun. 1995.
Black, M.J., Yacoob, Y., "Tracking and Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion," published at Center for Automation Research, University of Maryland, Jan. 1995.
Chow et al., "Towards a System for Automatic Facial Feature Detection," Pattern Recognition, 26(12): 1739-1755, 1993.
Yacoob et al., "Labeling of Human Face Components From Range Data," Proc. Computer Vision and Pattern Recognition, CVPR-94, pp. 592-593, New York, NY, Jun. 1993.
Koenderink et al., "Invariant Properties of the Motion Parallax Field Due to the Movement of Rigid Bodies Relative to An Observer," Optica Acta, 22(9):773-791, 1975.
Toelg et al., "Towards an Example-Based Image Compression Architecture for Video-Conferencing," MIT Artificial Intelligence Laboratory and Center For Biological And Computational Learning Department Of Brain And Cognitive Sciences, A.I. Memo No. 1494 & C.B.C.L. Memo No. 100, Jun. 1994.
Rousseeuw, P.J., Leroy, Annick M., "Introduction," Robust Regression and Outlier Detection, John Wiley & Sons, pp. 1-18.
Bergen et al., "Multiple Component Image Motion: Motion Estimation", David Sarnoff Research Center, Princeton, NJ, Jan. 1990, pp. 1-24.
D. Tock and I. Craw, "Tracking and Measuring Drivers' Eyes," in Christopher M. Brown and Demetri Terzopoulos, editors, Real-Time Computer Vision, Publications of the Newton Institute, pp. 71-89.
McLachlan, G.J., Basford, K.E., "Mixture Likelihood Approach to Clustering," Mixture Models: Inference and Applications to Clustering, Marcel Dekker, Inc., New York, pp. 9-21.
Shapiro, L.S., Brady, M. and Zisserman, A., "Tracking Moving Heads," in Real-Time Computer Vision, Publications of the Newton Institute, pp. 35-69.
Koch, R., "Dynamic 3-D Scene Analysis through Synthesis Feedback Control," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 6, Jun. 1993, pp. 556-568.
Yuille, A., Hallinan, P., "Deformable Templates," Active Vision, Andrew Blake and Alan Yuille, editors; pp. 21-38.
Azarbayejani et al., "Visually controlled graphics," IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(6):602-604, Jun. 1993.
Kass et al., "Snakes: Active Contour Models," First International Conference on Computer Vision, pp. 259-268, Jun. 1987.
Essa et al., "Tracking Facial Motion," Proceedings of the Workshop on Motion of Non-rigid and Articulated Objects, pp. 36-42, Austin, Texas, Nov. 1994.
Li et al., "3-D Motion Estimation in Model-based Facial Image Coding," IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(6):545-555, Jun. 1993.
Rosenblum et al., "Human Emotion Recognition From Motion Using A Radial Basis Function Network Architecture," Proceedings of the Workshop on Motion of Non-rigid and Articulated Objects, Austin, Texas, Nov. 1994.
Yacoob et al., "Computing Spatio-Temporal Representations of Human Faces," Proc. Computer Vision and Pattern Recognition, CVPR-94, pp. 70-75, Seattle, WA, Jun. 1994.
Essa et al., "A Vision System for Observing and Extracting Facial Action Parameters," Proc. Computer Vision and Pattern Recognition, CVPR-94, pp. 76-83, Seattle, WA, Jun. 1994.
Terzopoulos et al., "Analysis and Synthesis of Facial Image Sequences Using Physical and Anatomical Models," IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(6):569-579, Jun. 1993.
Bergen et al., "Hierarchical Model-Based Motion Estimation," Proc. of Second European Conference on Computer Vision, ECCV-92, vol. 588 of LNCS-Series, pp. 237-252, Springer-Verlag, May 1992.
Black et al., "The Robust Estimation of Multiple Motions: Affine and piecewise-smooth flow fields," Technical Report P93-00104, Xerox PARC, Dec. 1993.
Black et al., "A Framework for The Robust Estimation of Optical Flow," Proc. Int. Conf. on Computer Vision, ICCV-93, pp. 231-236, Berlin, Germany, May 1993.
Yuille et al., "Feature Extraction From Faces Using Deformable Templates," Proc. Computer Vision and Pattern Recognition, CVPR-89, pp. 104-109, Jun. 1989.
Ekman, "Facial Expressions Of Emotion: An Old Controversy and New Findings," Philosophical Transactions of the Royal Society of London, B(335):63-69, 1992.
Bassili, "Emotion Recognition: The Role of Facial Movement and the Relative Importance of Upper and Lower Areas of the Face," Journal of Personality and Social Psychology, 37:2049-2059, 1979.
Burt, P.J., "Multiresolution Techniques for Image Representation, Analysis, and 'Smart' Transmission," SPIE Conf. 1199, Visual Communications and Image Processing IV, Philadelphia, Nov. 1989.
Burt, P.J., "Attention Mechanisms for Vision in a Dynamic World," IEEE, 1988.
Hampel et al., Robust Statistics: The Approach Based on Influence Functions, John Wiley and Sons, New York, NY, 1986.
