Image analysis – Pattern recognition – Template matching
Reexamination Certificate
2002-05-15
2004-05-11
Dastouri, Mehrdad (Department: 2623)
Image analysis
Pattern recognition
Template matching
C382S205000
Reexamination Certificate
active
06735336
ABSTRACT:
FIELD OF THE INVENTION
The present invention is directed to methods of and systems for information processing, information mapping, pattern recognition and image analysis in computer systems.
BACKGROUND
With the increasing proliferation of imaging capabilities, information transactions in computer systems increasingly require the identification and comparison of digital images. In addition to conventional viewable digital images, other types of information, both viewable and non-viewable, are subject to pattern analysis and matching. Image identification and pattern analysis/recognition usually depend on analysis and classification of predetermined features of the image. Accurately identifying images using a computer system is complicated by relatively minor distortions of the images or patterns, caused when, for example, the images are shifted, rotated or otherwise deformed.
Object invariance is a field of visual analysis that deals with recognizing an object despite distortion such as that caused by shifting, rotation, other affine distortions, cropping, etc. Object invariance is used primarily in visual comparison tasks. Identification of a single object or image within a group of objects or images also complicates the image identification process. Selective attention, or “priming”, deals with how a visual object can be separated from its background or from other visual objects that act as distractions.
Current pattern recognition, image analysis and information mapping systems typically employ Bayesian Logic. Bayesian Logic predicts future events through the use of knowledge derived from prior events. In computer applications, Bayesian Logic relies on prior events to formulate or adjust a mathematical model used to calculate the probability of a specific event in the future. Without prior events on which to base a mathematical model, Bayesian Logic is unable to calculate the probability of a future event. Conversely, as the number of prior events increases, the accuracy of the mathematical model increases, as does the accuracy of the resulting prediction from the Bayesian Logic approach.
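The Bayesian updating described above can be sketched with Bayes' rule. The sketch below is illustrative only; the event names and all probability values are hypothetical, not drawn from the patent:

```python
# Minimal illustration of Bayesian updating: prior knowledge is revised
# into a posterior probability as new evidence arrives.

def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical scenario: H = "the two images match",
# E = "a key feature agrees between them".
prior = 0.5          # no strong prior belief either way
p_e_given_h = 0.9    # the feature usually agrees when the images match
p_e = 0.6            # the feature agrees 60% of the time overall

posterior = bayes_update(prior, p_e_given_h, p_e)
print(round(posterior, 3))  # 0.75
```

With no prior observations to estimate `p_e_given_h` and `p_e`, the update cannot be computed at all, which is the limitation the passage above describes.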
Currently, two common paradigms accommodate some degree of distortion (i.e., image deformation) of a visual object: point-to-point mapping and high-order statistics. Point-to-point mapping, or matching with shape contexts, achieves measurement stability by identifying one or more sub-patterns within the overall patterns or images being compared. Once these sub-patterns are identified, their statistical features are compared to determine agreement between the two images. Point-to-point mapping methodologies are further described in “Matching with Shape Contexts” by Serge Belongie and Jitendra Malik, presented in June 2000 at the IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL). A second point-to-point mapping method uses what/where networks and assessments of Lie groups of transformations based on back-propagation networks. In this method an optimal transformation is identified for a feature in a first image and is used to compare the same feature in a second image. This approach deconstructs the image into a sum or a multiplicity of functions, which are then mapped to an appropriately deconstructed image function of a compared, or second, image. What/where networks have been used by Dr. Rajesh Rao and Dana Ballard of the Salk Institute in La Jolla, Calif. The point-to-point mapping techniques described attempt to map a test or input image to a reference or target image that is either stored directly in memory or encoded into memory. The point-to-point approach achieves limited image segmentation and mapping through the use of a statistical approach.
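The shape-context idea can be sketched as follows: for a reference point, histogram the positions of the other points in log-polar bins, then compare histograms with a chi-squared cost. This is a deliberately crude stand-in for the Belongie–Malik descriptor (fixed bin counts, no normalization), purely to show the mechanism:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def shape_context(points: List[Point], ref: Point,
                  r_bins: int = 3, theta_bins: int = 4) -> List[int]:
    """Histogram the other points' positions relative to `ref`,
    binned by log-distance and angle (a crude shape-context descriptor)."""
    hist = [0] * (r_bins * theta_bins)
    for p in points:
        if p == ref:
            continue
        dx, dy = p[0] - ref[0], p[1] - ref[1]
        r = math.hypot(dx, dy)
        theta = math.atan2(dy, dx) % (2 * math.pi)
        ri = min(int(math.log1p(r)), r_bins - 1)          # log-radial bin
        ti = int(theta / (2 * math.pi) * theta_bins) % theta_bins  # angular bin
        hist[ri * theta_bins + ti] += 1
    return hist

def chi2_cost(h1: List[int], h2: List[int]) -> float:
    """Chi-squared distance between two histograms (0 = identical)."""
    return 0.5 * sum((a - b) ** 2 / (a + b)
                     for a, b in zip(h1, h2) if a + b > 0)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(chi2_cost(shape_context(square, (0, 0)),
                shape_context(square, (0, 0))))  # 0.0
```

Matching two images then amounts to finding point correspondences that minimize the summed chi-squared cost, which is the sub-pattern agreement test described above.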
In the high-order statistical approach, both the original input image and the comparison target image are mapped into a high-dimensional space, and statistical measurements are performed on the images in that space. These high-order statistical measurements are compared to quantify the amount of agreement between the two images, indicative of image similarity. This approach is used by Support Vector Machines, High Order Clustering (Hava Siegelmann and Hod Lipson) and Tangent Distance Neural Networks (TDNN). Support Vector Machines are described by Nello Cristianini and John Shawe-Taylor (ISBN 0-521-78019-5).
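The high-dimensional mapping used by Support Vector Machines is typically implicit, via a kernel function. A minimal sketch, assuming the images have already been reduced to small feature vectors (the vectors below are hypothetical):

```python
import math

def rbf_kernel(x, y, gamma: float = 1.0) -> float:
    """Compare x and y implicitly in a very high-dimensional feature
    space: k(x, y) = exp(-gamma * ||x - y||^2). Returns 1.0 for
    identical inputs and decays toward 0 as they diverge."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

img_a = [0.1, 0.8, 0.3]   # hypothetical feature vectors for three images
img_b = [0.1, 0.8, 0.3]
img_c = [0.9, 0.1, 0.7]

print(rbf_kernel(img_a, img_b))          # 1.0 (identical images)
print(rbf_kernel(img_a, img_c) < 0.5)    # True: dissimilar images score low
```

The kernel value plays the role of the "amount of agreement" statistic described above, without ever materializing the high-dimensional representation.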
Both the point-to-point mapping and the high-order statistics approaches have been used in attempts to recognize images subject to various transformations due to shifting, rotation and other deformations of the subject. These approaches are largely ineffective at isolating a comparison object (selective attention) from the background or from other visual objects.
In contrast to these two common paradigms, the human brain may compare two objects or two patterns using “insight” without the benefit of prior knowledge of the objects or the patterns. A Gestalt approach to comparing objects or patterns attempts to include the concept of insight by focusing on the whole object rather than on individual portions of the object. Gestalt techniques have not been applied to computer systems to perform pattern recognition, image analysis or information mapping. Gestalt mapping is further described in Vision Science: Photons to Phenomenology by Stephen E. Palmer, ISBN 0-262-16183-4.
SUMMARY
According to one aspect of the present invention, a method of comparing an input pattern with a memory pattern comprises the steps of: loading a representation of said input pattern into cells in an input layer; loading a representation of said memory pattern into cells in a memory layer; loading an initial value into cells in intermediate layers between said input layer and said memory layer; comparing values of cells in said intermediate layers with values stored in cells of adjacent layers; updating values stored in cells in said intermediate layers based on said step of comparing; and mapping cells in said memory layer to cells in said input layer.
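The claimed steps can be sketched in miniature. The update rule below (each intermediate cell relaxing toward the mean of the corresponding cells in the two adjacent layers) is an illustrative assumption, not the rule specified by the patent; the patterns are hypothetical:

```python
# Hedged sketch of the claimed layered comparison, under an assumed
# averaging update rule. Boundary layers (input, memory) stay fixed;
# intermediate layers iteratively settle between them.

def map_patterns(input_pattern, memory_pattern, n_intermediate=3, n_iter=50):
    # Steps 1-2: load representations into the input and memory layers.
    layers = [list(input_pattern)]
    # Step 3: load an initial value (0.0 here) into intermediate-layer cells.
    for _ in range(n_intermediate):
        layers.append([0.0] * len(input_pattern))
    layers.append(list(memory_pattern))

    # Steps 4-5: compare each intermediate cell with the corresponding
    # cells in the adjacent layers and update it toward their mean.
    for _ in range(n_iter):
        for i in range(1, len(layers) - 1):
            layers[i] = [(a + b) / 2.0
                         for a, b in zip(layers[i - 1], layers[i + 1])]

    # Step 6: the settled middle layer serves as the memory-to-input mapping.
    return layers[len(layers) // 2]

# Opposite patterns settle midway in the central layer.
print([round(v, 6) for v in map_patterns([0.0, 1.0], [1.0, 0.0])])  # [0.5, 0.5]
```

Under this assumed rule the intermediate layers converge to an interpolation between the two fixed patterns, giving a smooth correspondence between memory cells and input cells.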
REFERENCES:
patent: 6400828 (2002-06-01), Covell et al.
patent: 6571227 (2003-05-01), Agrafiotis et al.
patent: 2002/0099675 (2002-07-01), Agrafiotis et al.
patent: 2003/0035573 (2003-02-01), Duta et al.
patent: WO 97/04400 (1997-02-01), None
Martin Beckerman, “Adaptive Cooperative Systems”, 1997, pp. 242-248 and pp. 286-290.
Hava T. Siegelmann, “Neural Networks and Analog Computation Beyond the Turing Limit” Chapter 12: Computation Beyond the Turing Limit, 1999, pp. 153-163.
L. Prasad & S.S. Iyengar, “Wavelet Analysis with Applications to Image Processing”, 1997, pp. 90-100 and pp. 105-112.
Stephen E. Palmer, “Vision Science Photons to Phenomenology”, Chapter 4: Processing Image Structure, 1999, pp. 146-197.
Haili Chui and Anand Rangarajan, “A New Point Matching Algorithm for Non-Rigid Registration”, Oct. 2002, pp. 1-32.
“Principles of Dielectrophoresis and Electrorotation”, pp. 1-2.
Anand Rangarajan, Haili Chui, Eric Mjolsness, “A New Distance Measure for Non-Rigid Image Matching”, pp. 1-16.
Haili Chui and Anand Rangarajan, “A New Algorithm for Non-Rigid Point Matching”, pp. 1-8.
Anand Rangarajan, Haili Chui, Eric Mjolsness, “A Relationship Between Spline-based Deformable Models and Weighted Graphs in Non-Rigid Matching”, pp. 1-8.
Inventors: Avni Yossi; Suchard Eytan
Assignee: Applied Neural Computing Ltd.
Examiner: Dastouri Mehrdad
Attorney: Fulbright & Jaworski L.L.P.