System and method for face recognition using synthesized...

Image analysis – Applications – Personnel identification

Reexamination Certificate


Details

Classification: C345S419000
Type: Reexamination Certificate
Status: active
Patent number: 06975750

ABSTRACT:
A system and method that includes a virtual human face generation technique for synthesizing images of a human face at a variety of poses, preferably from just a frontal and a profile image of a specific subject. An automatic deformation technique aligns the features of a generic 3-D graphic face model with the corresponding features of these pre-provided images of the subject: a generic frontal face model is aligned with the frontal image, and a generic profile face model is aligned with the profile image. The deformation procedure yields a single 3-D face model that precisely reflects the geometric features of the specific subject. Subdivision spline surface construction and multi-direction texture mapping techniques are then used to smooth the model and add photometric detail to it. This smoothed and textured 3-D face model is then used to generate 2-D images of the subject at a variety of face poses. These synthesized face images can be used to build a set of training images for training a recognition classifier.
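The multi-pose synthesis step described in the abstract can be illustrated with a minimal sketch: rotate the vertices of a 3-D face model about the vertical axis and project them onto the image plane, one projection per desired head pose. All names here are hypothetical, and a real system would render the smoothed, textured mesh with perspective projection and lighting rather than projecting bare points.

```python
import math

def rotate_y(points, angle_deg):
    """Rotate 3-D points (x, y, z) about the vertical (y) axis by a yaw angle."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def project(points):
    """Orthographic projection onto the image plane (drop the depth coordinate)."""
    return [(x, y) for x, y, z in points]

def synthesize_poses(model_points, yaw_angles):
    """Generate one 2-D point set per requested head pose."""
    return {a: project(rotate_y(model_points, a)) for a in yaw_angles}

# Toy "face model": two eye points with the nose tip protruding toward the camera.
face = [(-1.0, 1.0, 0.0), (1.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
views = synthesize_poses(face, [-30, 0, 30])
```

In the frontal view (yaw 0) the nose tip projects to the image center; as the head turns, its projected x-coordinate shifts by the sine of the yaw angle, which is the geometric effect the patent exploits to produce training images at poses never photographed.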

REFERENCES:
patent: 5469512 (1995-11-01), Fujita et al.
patent: 5850470 (1998-12-01), Kung et al.
patent: 6525723 (2003-02-01), Deering
Yan et al., Deformable Model Based Generation of Realistic 3-D Specific Human Face, IEEE, May 1998.
Georghiades et al., Illumination Based Image Synthesis, IEEE Proceedings, Jun. 26, 1999.
K. Mase, Recognition of Facial Expression from Optical Flow, IEICE Transactions, Special Issue on Computer Vision and Its Application, 1991, E74(1).
F.I. Parke, Control Parameterization for Facial Animation, In N.M. Thalmann and D. Thalmann, editors, Computer Animation, Springer-Verlag, 1991, 3-13.
Y. Wu, N.M. Thalmann, D. Thalmann, A Dynamic Wrinkle Model in Facial Animation and Skin Aging. The Journal of Visualization and Computer Animation, vol. 6, 1995, 195-205.
K. Aizawa, H. Harashima, and T. Saito. Model-based Analysis Synthesis Image Coding (MBASIC) System for a Person's Face. Signal Processing: Image Communication, 1:139-152, 1989.
Yuille, A.L., Hallinan, P.W. & Cohen, D.S. Feature Extraction from Faces Using Deformable Templates, International Journal of Computer Vision, 1992, 8, 99-111.
Chow, G. & Li, X.B. Towards a System for Automatic Facial Feature Detection. Pattern Recognition, 26(12), 1993, 1739-1755.
Xie, X., Sudhaker, R. & Zhuang, H. On Improving Eye Feature Extraction Using Deformable Templates, Pattern Recognition, 27(6), 1994, 791-799.
Matthew Turk, Alex Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience, 3(1): 71-86, 1991.
Alex Pentland, Baback Moghaddam and Thad Starner, View-Based and Modular Eigenspaces for Face Recognition, Technical Report No. 245, Media Laboratory, Massachusetts Institute of Technology, 1994.
Jie Yan, Wen Gao, Baocai Yin. Generation of Realistic 3-D Human Face, Chinese Journal of Computers, 1999, 22(2): 147-153.
David Beymer and Tomaso Poggio, Face Recognition from One Example View, Massachusetts Institute of Technology.

Profile ID: LFUS-PAI-O-3519072
