Classification: Image analysis – Learning systems (C382S181000)
Type: Reexamination Certificate
Filed: 2002-05-13
Issued: 2003-09-30
Examiner: Johns, Andrew W. (Department: 2621)
Status: Active
Patent Number: 06628821
FIELD OF THE INVENTION
The present invention is directed to data analysis, such as audio analysis, image analysis and video analysis, and more particularly to the estimation of hidden data from observed data. For image analysis, this hidden data estimation involves the placement of control points on unmarked images or sequences of images to identify corresponding fiduciary points on objects in the images.
BACKGROUND OF THE INVENTION
Some types of data analysis and data manipulation operations require that “hidden” data first be derived from observable data. In the field of speech analysis, for example, one form of observable data is pitch-synchronous frames of speech samples. To perform linear predictive coding on a speech signal, the pitch-synchronous frames are labeled to identify vocal-tract positions. The pitch-synchronous data is observable in the sense that it is intrinsic to the data and can easily be derived using known signal processing techniques, simply by correctly aligning a frame window with the speech samples. In contrast, the vocal-tract positions must be estimated either using some extrinsic assumptions (such as an acoustic waveguide having uniform-length sections, each of constant width) or using a general modeling framework with parameter values derived from an example database (e.g. a linear manifold model with labeled data). The vocal-tract positions are therefore known as “hidden” data.
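As an illustration of the observable half of this distinction, the sketch below estimates linear predictive coding coefficients from a single frame of samples using the standard Levinson-Durbin recursion. The synthetic frame and the model order are made up for the example; the vocal-tract positions that such coefficients are sometimes used to infer remain the "hidden" data and are not computed here.

```python
# Illustrative sketch: LPC coefficients derived from one observable
# frame of speech samples via autocorrelation + Levinson-Durbin.
import math

def autocorrelation(frame, max_lag):
    """r[k] = sum_n frame[n] * frame[n - k], for k = 0..max_lag."""
    return [sum(frame[n] * frame[n - k] for n in range(k, len(frame)))
            for k in range(max_lag + 1)]

def lpc(frame, order):
    """Levinson-Durbin recursion: returns (coefficients a[1..order], residual energy)."""
    r = autocorrelation(frame, order)
    a = [0.0] * (order + 1)
    err = r[0]                              # energy of the frame
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err                       # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)                # prediction error shrinks each order
    return a[1:], err

# Example: a decaying sinusoid stands in for one frame of voiced speech.
frame = [math.sin(0.3 * n) * math.exp(-0.01 * n) for n in range(200)]
coeffs, residual = lpc(frame, order=4)
```

Because the frame is nearly sinusoidal, a low-order predictor captures most of its energy and the residual is small relative to the frame energy.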
In image processing applications, the observable data of an image includes attributes such as color or grayscale values of individual pixels, range data, and the like. In some types of image analysis, it is necessary to identify specific points in an image that serve as the basis for identifying object configurations or motions. In gesture recognition, for example, it is useful to identify the locations and motions of each of the fingers. Another type of image processing application relates to image manipulation. For example, in image morphing, where one image is transformed into another, it is necessary to identify points of correspondence in each of the two images. If an image of a face is to morph into an image of a different face, for example, it may be appropriate to identify points in each of the two images that designate the outline and tip of the nose, the outlines of the eyes and the irises, the inner and outer boundaries of the mouth, the tops and bottoms of the upper and lower teeth, the hairline, etc. After the corresponding points in the two images have been identified, they serve as constraints for controlling the manipulation of pixels during the transform from one image to the other.
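The constraint role of corresponding control points can be sketched very simply: for an intermediate morph frame at blend parameter t, each control point is placed on the line between its two correspondents. The point coordinates below are invented for illustration.

```python
# Illustrative sketch: intermediate control-point positions in a morph.
def interpolate_control_points(points_a, points_b, t):
    """Blend corresponding (x, y) control points; t = 0 reproduces
    image A's layout, t = 1 reproduces image B's."""
    if len(points_a) != len(points_b):
        raise ValueError("control point lists must correspond one-to-one")
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(points_a, points_b)]

# Example: hypothetical tip-of-nose and mouth-corner points in two faces.
face_a = [(100.0, 120.0), (80.0, 150.0)]
face_b = [(110.0, 118.0), (90.0, 155.0)]
halfway = interpolate_control_points(face_a, face_b, 0.5)
# halfway == [(105.0, 119.0), (85.0, 152.5)]
```

The interpolated points then act as warp targets for the surrounding pixels at each intermediate frame.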
In a similar manner, control points are useful in video compositing operations, where a portion of an image is incorporated into a video frame. Again, corresponding points in the two images must be designated, so that the incorporated image will be properly aligned and scaled with the features of the video frame into which it is being incorporated. These control points are one form of hidden data in an image.
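One minimal way to realize the alignment and scaling described above is a similarity transform (uniform scale, rotation, translation) determined by two pairs of corresponding control points; modeling 2-D points as complex numbers makes the transform z → a·z + b a two-unknown linear solve. This is a sketch of the general idea, not the specific method of any cited system, and the point values are invented.

```python
# Illustrative sketch: aligning a composited image with a video frame
# from two pairs of corresponding control points.
def similarity_from_two_points(src, dst):
    """src, dst: two (x, y) control points each, in corresponding order.
    Returns (a, b) such that each source point p maps to a*p + b
    when points are treated as complex numbers."""
    p1, p2 = (complex(*p) for p in src)
    q1, q2 = (complex(*q) for q in dst)
    a = (q2 - q1) / (p2 - p1)   # encodes uniform scale and rotation
    b = q1 - a * p1             # translation
    return a, b

def apply_similarity(a, b, point):
    z = a * complex(*point) + b
    return (z.real, z.imag)

# Example: two mouth-corner control points in a source image and the
# frame into which it is composited.
a, b = similarity_from_two_points([(0.0, 0.0), (10.0, 0.0)],
                                  [(5.0, 5.0), (25.0, 5.0)])
# This transform doubles distances (|a| == 2) and shifts by (5, 5).
```

With more than two correspondences, a least-squares fit over all control points would be used instead of this exact two-point solve.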
In the past, the identification of hidden data, such as control points in an image, was typically carried out on a manual basis. In most morphing processes, for example, a user was required to manually specify all of the corresponding control points in the beginning and ending images. If only two images are involved, this requirement is somewhat tedious, but manageable. However, in situations involving databases that contain a large number of images, the need to manually identify the control points in each image can become quite burdensome. For example, U.S. Pat. No. 5,880,788 discloses a video manipulation system in which images of different mouth positions are selected from a database and incorporated into a video stream, in synchrony with a soundtrack. For optimum results, control points which identify various fiduciary points on the image of a person's mouth are designated for each frame in the video, as well as each mouth image stored in the database. These control points serve as the basis for aligning the image of the mouth with the image of a person's face in the video frame. It can be appreciated that manual designation of the control points for all of the various images in such an application can become quite cumbersome.
Most previous efforts at automatically recognizing salient components of an image have concentrated on features within the image. For example, two articles entitled “View-Based and Modular Eigenspaces for Face Recognition,” Pentland et al, Proc. IEEE ICCVPR '94, 1994, and “Probabilistic Visual Learning for Object Detection,” Moghaddam et al, Proc. IEEE CVPR, 1995, disclose a technique in which various features of a face, such as the nose, eyes, and mouth, can be automatically recognized. Once these features have been identified, an alignment point is designated for each feature, and the variations of the newly aligned features from the expected appearances of the features can be used for recognition of a face.
While this technique is useful for data alignment in applications such as face recognition, it does not by itself provide a sufficient number of data points for image manipulation techniques, such as morphing and image compositing, or other types of image processing which rely upon the location of a large number of specific points, such as general gesture or expression recognition.
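The "variation from expected appearance" measure used by such eigenspace approaches can be sketched as a distance-from-feature-space computation: subtract the mean appearance, project onto a learned orthonormal basis, and report the residual norm. In practice the mean and basis come from principal component analysis of a training database; the tiny vectors below are invented stand-ins.

```python
# Illustrative sketch: distance of an aligned feature patch from its
# expected appearance subspace (mean + orthonormal basis assumed given).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def distance_from_feature_space(x, mean, basis):
    """Residual norm of x after projecting (x - mean) onto the basis."""
    d = [a - b for a, b in zip(x, mean)]
    recon = [0.0] * len(d)
    for e in basis:                     # basis vectors assumed orthonormal
        c = dot(d, e)
        recon = [r + c * ei for r, ei in zip(recon, e)]
    resid = [a - b for a, b in zip(d, recon)]
    return dot(resid, resid) ** 0.5

mean = [0.0, 0.0, 0.0]
basis = [[1.0, 0.0, 0.0]]               # a single made-up "eigenfeature"
# A patch lying along the basis direction scores zero; one off it does not.
near = distance_from_feature_space([3.0, 0.0, 0.0], mean, basis)
far = distance_from_feature_space([0.0, 4.0, 0.0], mean, basis)
```

A low distance indicates that the patch looks like the learned feature class; a high distance flags an unexpected appearance.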
Other prior art techniques for determining data points from an image employ active contour models or shape-plus-texture models. Active contour models, also known as “snakes”, are described in M. Kass, A. Witkin, D. Terzopoulos, “Snakes: Active Contour Models,” IEEE International Conference on Computer Vision, 1987, and C. Bregler and S. Omohundro, “Surface Learning with Applications to Lipreading,” Neural Information Processing Systems, 1994. The approaches described in these references use a relaxation technique to find a local minimum of an “energy function”, where the energy function is the sum of an external energy term, determined from the grayscale values of the image, and an internal energy term, determined from the configuration of the snake or contour itself. The external energy term typically measures the local image gradient or the local image difference from some expected value. The internal energy term typically measures local “shape” (e.g. curvature, length). The Bregler and Omohundro reference discloses the use of a measure of distance between the overall shape of the snake and the expected shapes for the contours being sought as an internal energy term.
Snakes can easily be thought of as providing control point locations, and the extension to snakes taught by the Bregler et al reference allows one to take advantage of example-based learning to constrain the estimated locations of these control points. However, there is no direct link between the image appearance and the shape constraints. This makes the discovery of the “correct” energy function an error-prone process, which relies heavily on the user's experience and familiarity with the problem at hand. The complete energy function cannot easily or automatically be derived from data analysis of an example training set.
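The energy function such a relaxation would minimize can be illustrated concretely. In this sketch the external term simply sums grayscale values under the contour (so dark pixels attract it, one of several common choices), and the internal term penalizes stretching and bending with discrete first and second differences; the weights, image, and contours are invented for the example.

```python
# Illustrative sketch of a discrete snake energy: external (image) term
# plus internal (shape) term, evaluated for a closed contour.
def snake_energy(points, image, alpha=1.0, beta=1.0):
    """points: closed contour as (x, y) pixel coordinates.
    image: 2-D grayscale grid indexed image[y][x]."""
    external = sum(image[y][x] for x, y in points)  # low values attract
    stretch = 0.0
    bend = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i - 1]
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        stretch += (x1 - x0) ** 2 + (y1 - y0) ** 2          # length term
        bend += (x0 - 2 * x1 + x2) ** 2 + (y0 - 2 * y1 + y2) ** 2  # curvature
    return external + alpha * stretch + beta * bend

# Example: a 4x4 grayscale image with a dark square in the middle.
image = [[9, 9, 9, 9],
         [9, 1, 1, 9],
         [9, 1, 1, 9],
         [9, 9, 9, 9]]
on_dark = snake_energy([(1, 1), (2, 1), (2, 2), (1, 2)], image)
on_bright = snake_energy([(0, 0), (3, 0), (3, 3), (0, 3)], image)
# The contour hugging the dark square has the lower energy.
```

Relaxation would iteratively perturb the contour points toward whichever configuration lowers this total; the criticism in the text is that choosing the terms and weights is a manual, error-prone design step.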
Shape-plus-texture models are described in A. Lanitis, C. J. Taylor, T. F. Cootes, “A Unified Approach to Coding and Interpreting Face Images,” International Conference on Computer Vision, 1995, and D. Beymer, “Vectorizing Face Images by Interleaving Shape and Texture Computations,” A.I. Memo 1537. Shape-plus-texture models describe the appearance of an object in an image using shape descriptions (e.g. contour locations or multiple point locations) plus a texture description, such as the expected grayscale values at specified offsets relative to the shape-description points. The Beymer reference discloses that the model for texture is example-based, using an affine manifold model description derived from the principal component analysis of a database of shape-free images (i.e. the images are pre-warped to align their shape descriptions).
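The texture half of such a model can be sketched as sampling grayscale values at fixed offsets relative to each shape point; the image, shape point, and offsets below are invented, and offsets are assumed to stay inside the image bounds.

```python
# Illustrative sketch: sampling the "texture" of a shape-plus-texture
# model at specified offsets relative to the shape-description points.
def sample_texture(image, shape_points, offsets):
    """Return grayscale samples at each (dx, dy) offset of each
    (x, y) shape point; image is indexed image[y][x]."""
    return [image[y + dy][x + dx]
            for x, y in shape_points
            for dx, dy in offsets]

image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
# One shape point at (1, 1), sampled at itself and two neighbors.
samples = sample_texture(image, [(1, 1)], [(0, 0), (1, 0), (0, -1)])
# samples == [50, 60, 20]
```

Concatenating such samples across all shape points yields the texture vector that the example-based (e.g. affine manifold) model would then describe.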
Inventors: Covell, Michele; Slaney, Malcolm
Assignee: Interval Research Corporation
Primary Examiner: Johns, Andrew W.
Assistant Examiner: Nakhjavan, Shervin
Attorney, Agent, or Firm: Van Pelt & Yi LLP