Efficient search for a gray-level pattern in an image using...

Image analysis – Image enhancement or restoration – Artifact removal or suppression


Details

U.S. classifications: C382S169000, C345S182000
Type: Reexamination Certificate
Status: active
Patent number: 06310983


BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is directed to a system for finding a gray-level pattern in an image.
2. Description of the Related Art
There has been a significant amount of work on the problem of locating the best transformation of a gray-level pattern in an image. A transformation, in a broad sense, is a movement of a pattern or image. For example, while recording a sequence of video images, an object may move, causing its position in the recorded images to change as the video sequence progresses. One transformation of interest is a translation, which is defined as movement in two dimensions (e.g., the x and y directions) without rotation. A more complicated transformation is the affine transformation, which can include translation, rotation, scaling, and/or shear. Under an affine transformation, parallel lines in the pattern remain parallel after being transformed.
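For concreteness, a translation simply shifts every point by a fixed offset, while an affine transformation applies a linear map (rotation, scale, shear) followed by a translation; in standard matrix form:

```latex
% Translation by (t_x, t_y):
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}
\qquad
% Affine transformation (parallel lines stay parallel):
\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}
```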
The ability to locate the best transformation of a gray-level pattern in an image forms the basis of one of the components of MPEG encoding. It is also part of computer vision systems that are used to navigate robots, find parts automatically in an inventory or manufacturing facility, register images, track objects, etc.
One method for finding the correct transformation of a pattern in an image is to enumerate every possible transformation and compare the pattern to the image at each one. The transformation with the lowest error is taken to be the actual transformation of the pattern in the image. Because this method tests every transformation, it is slow and computationally expensive. Previous work to improve on it has concentrated on search methods that are cheaper but may not find the best transformation.
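As a minimal sketch of this exhaustive approach (restricted to translations, and assuming a sum-of-absolute-differences error measure; all names are illustrative):

```python
import numpy as np

def brute_force_search(image: np.ndarray, pattern: np.ndarray):
    """Try every translation of `pattern` inside `image` and return the best.

    Guaranteed to find the lowest-error translation, but O(W*H*w*h) for a
    W x H image and a w x h pattern, which is why cheaper (unguaranteed)
    search methods were developed."""
    image = image.astype(float)
    pattern = pattern.astype(float)
    H, W = image.shape
    h, w = pattern.shape
    best_err, best_xy = float('inf'), (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            # Sum-of-absolute-differences error at translation (x, y).
            err = np.abs(image[y:y + h, x:x + w] - pattern).sum()
            if err < best_err:
                best_err, best_xy = err, (x, y)
    return best_xy, best_err
```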
For example, the computer vision community has experimented with methods that use the sum of squared differences to compare intensity image patches; however, such work has concentrated on search methods that are not guaranteed to find the best corresponding patches. For example, pyramid-based systems work with multi-resolution representations of the image and the pattern: they match first at the coarsest resolution, then at the next finer resolution in a smaller region around the coarse match, and so on. A mistake at the coarsest resolution can easily cause a large error in the final result.
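A hedged sketch of the pyramid idea just described; the 2x2 block-average downsampling, the refinement radius, and all function names are assumptions for illustration, not the method of any particular prior system:

```python
import numpy as np

def sad_search(image, pattern, center=None, radius=2):
    """SAD search over all translations, or only within `radius` of `center`."""
    image, pattern = image.astype(float), pattern.astype(float)
    H, W = image.shape
    h, w = pattern.shape
    xs, ys = range(W - w + 1), range(H - h + 1)
    if center is not None:
        cx, cy = center
        xs = range(max(0, cx - radius), min(W - w, cx + radius) + 1)
        ys = range(max(0, cy - radius), min(H - h, cy + radius) + 1)
    best = (float('inf'), (0, 0))
    for y in ys:
        for x in xs:
            err = np.abs(image[y:y + h, x:x + w] - pattern).sum()
            best = min(best, (err, (x, y)))
    return best[1]

def downsample(a):
    """Halve resolution by averaging 2x2 blocks (one common pyramid choice)."""
    h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
    a = a[:h, :w].astype(float)
    return (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0

def pyramid_search(image, pattern, levels=3):
    """Coarse-to-fine matching. Fast, but if the full search at the coarsest
    level picks the wrong region, the finer levels cannot recover."""
    imgs, pats = [image], [pattern]
    for _ in range(levels - 1):
        imgs.append(downsample(imgs[-1]))
        pats.append(downsample(pats[-1]))
    x, y = sad_search(imgs[-1], pats[-1])          # full search, coarsest level
    for lvl in range(levels - 2, -1, -1):          # refine around 2x coordinates
        x, y = sad_search(imgs[lvl], pats[lvl], center=(2 * x, 2 * y))
    return x, y
```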
Motion compensation for video compression has also been the focus of much investigation. The emphasis has been on searching for a translation efficiently rather than evaluating every translation. Again, these methods come with no guarantee; that is, the previous work does not guarantee finding the best translation, and experimentation reveals that it may not be accurate enough to correctly find the pattern in a new image.
The prior art thus attempts to improve on the traditional methods for finding a pattern in an image by reducing computation time at the expense of accuracy. Therefore, a system is needed that can find a transformation of a gray-level pattern in an image faster than trying every transformation, but more accurately than the prior art.
SUMMARY OF THE INVENTION
The present invention is directed to a system for finding a transformation of a gray-level pattern in an image. The system recursively searches a transformation space in a multi-resolution manner. At each resolution, the transformation space is divided into groups of translations. For each group, a set of difference values is computed and compared against the best difference value known so far. If any of the computed difference values is greater than the previously known best difference value, the corresponding group of translations is removed from further consideration.
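In schematic form, the pruning step might look as follows; `groups`, `lower_bound`, and `best_difference` are hypothetical stand-ins for the groups of translations, the computed difference values, and the best known difference value:

```python
def prune_groups(groups, lower_bound, best_difference):
    """Discard every group whose difference value (a lower bound on the error
    of all translations it contains) exceeds the best difference value seen
    so far; such a group cannot contain the winning translation."""
    return [g for g in groups if lower_bound(g) <= best_difference]
```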
In one embodiment, the system divides the transformation space into a plurality of groups of translations and creates one or more image arrays. Each image array includes one or more sums of pixels in a region of the image. The system also creates one or more pattern arrays, one or more minimum arrays, and one or more maximum arrays. Each pattern array includes one or more sums of pixels in a region of the pattern. The system determines a first difference value for each group of translations. Each first difference value is based on a first pattern array corresponding to a first image array, a first minimum array corresponding to the first image array, and a first maximum array corresponding to the first image array. Translations in one or more groups having a first difference value greater than a previously determined best known difference value are discarded. Translations that have not been discarded are investigated further to determine which translation has an optimal difference value. The translation with the optimal difference value is the transformation of the gray-level pattern in the image.
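One concrete way such a first difference value can act as a bound, consistent with the arrays described above. This is a sketch under assumptions: a sum-of-absolute-differences error, block size `b`, a g x g group of translations with top-left translation (gx, gy) that keeps the pattern inside the image, and illustrative helper names throughout:

```python
import numpy as np

def sliding_box_sums(image: np.ndarray, b: int) -> np.ndarray:
    """Sum of every b x b window of `image` at every offset, via an integral
    image. These sums play the role of an 'image array' of region sums."""
    s = np.zeros((image.shape[0] + 1, image.shape[1] + 1))
    s[1:, 1:] = image.cumsum(axis=0).cumsum(axis=1)
    return s[b:, b:] - s[:-b, b:] - s[b:, :-b] + s[:-b, :-b]

def first_difference_value(image, pattern, b, gx, gy, g):
    """A lower bound on the error of *every* translation in the g x g group
    whose top-left translation is (gx, gy).

    Illustrative reading of the patent's arrays: the per-block pattern sums
    are the 'pattern array', and the window min/max below stand in for the
    minimum and maximum arrays (a real implementation would precompute them)."""
    img_sums = sliding_box_sums(image.astype(float), b)
    pat = pattern.astype(float)
    h, w = pat.shape
    bound = 0.0
    for j in range(0, h - b + 1, b):              # aligned b x b pattern blocks
        for i in range(0, w - b + 1, b):
            p = pat[j:j + b, i:i + b].sum()       # pattern array entry
            window = img_sums[gy + j:gy + j + g, gx + i:gx + i + g]
            lo, hi = window.min(), window.max()   # min/max array entries
            # If the pattern block sum falls outside [lo, hi], every
            # translation in the group pays at least this much error here,
            # since |sum(I) - sum(P)| <= sum(|I - P|) over the block.
            bound += max(p - hi, lo - p, 0.0)
    return bound
```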
In one embodiment, the system also determines a second difference value for at least a subset of the remaining groups that have not been discarded. Each second difference value is based on a second pattern array corresponding to a second image array, a second minimum array corresponding to the second image array, and a second maximum array corresponding to the second image array. Translations in groups having a second difference value greater than the previously determined best known difference value are discarded. The system then determines a third difference value for at least a subset of the remaining groups. Each third difference value is based on a third pattern array corresponding to a third image array, a third minimum array corresponding to the third image array, and a third maximum array corresponding to the third image array. Translations in one or more groups having a third difference value greater than the previously determined best known difference value are discarded.
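The first, second, and third tests form a cascade: cheaper or coarser bounds run first, and finer ones run only on survivors. In hedged form (all names illustrative):

```python
def survives_cascade(group, bounds, best_difference):
    """Apply successively tighter (and costlier) difference-value tests,
    stopping at the first one that rules the group out. `bounds` might hold
    the first, second, and third difference computations described above."""
    for lower_bound in bounds:
        if lower_bound(group) > best_difference:
            return False          # discard the whole group of translations
    return True                   # survives every test; investigate further
```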
In one embodiment, determining which translation has the optimal difference value includes the steps of subdividing a group of translations whose first difference value is less than or equal to the best known difference value, creating a new set of minimum arrays and maximum arrays based on the new group size, and repeating the steps of determining a first difference value and discarding translations using the new set of minimum arrays and maximum arrays. A sketch of this recursion follows.
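A minimal branch-and-bound driver in that spirit, assuming square power-of-two group sizes and hypothetical `bounds_for_size` and `evaluate` callables; this is an illustrative reading of the summary, not the patent's exact procedure:

```python
def search(groups, bounds_for_size, evaluate, best=float('inf'), best_t=None):
    """Recursively subdivide translation cells, pruning by difference values.

    groups          -- list of (x, y, size) cells of candidate translations
    bounds_for_size -- returns the difference-value tests for a cell size
                       (the "new set of minimum arrays and maximum arrays")
    evaluate        -- exact difference value of one translation
    """
    for (x, y, size) in groups:
        if size == 1:                          # a single translation remains
            err = evaluate((x, y))
            if err < best:
                best, best_t = err, (x, y)     # new best known difference
            continue
        if any(bound((x, y, size)) > best for bound in bounds_for_size(size)):
            continue                           # discard the whole cell
        half = size // 2                       # subdivide into four children
        children = [(x, y, half), (x + half, y, half),
                    (x, y + half, half), (x + half, y + half, half)]
        best, best_t = search(children, bounds_for_size, evaluate, best, best_t)
    return best, best_t
```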


REFERENCES:
patent: 5101446 (1992-03-01), Resnikoff et al.
patent: 5134668 (1992-07-01), Appel
patent: 5265200 (1993-11-01), Edgar
patent: 5434931 (1995-07-01), Quardt et al.
patent: 5459495 (1995-10-01), Scheffer et al.
patent: 5528740 (1996-06-01), Hill et al.
patent: 5729625 (1998-03-01), Miyoke
W. Rucklidge, Efficient Visual Recognition Using the Hausdorff Distance, Lecture Notes in Computer Science, 1996.
P. Anandan, A Computational Framework and an Algorithm for the Measurement of Visual Motion, International Journal of Computer Vision 2, 1989, pp. 283-310.
Michael J. Black, Combining Intensity and Motion for Incremental Segmentation and Tracking Over Long Image Sequences, Computer Vision—ECCV '92, Second European Conference on Computer Vision, May 19-22, 1992, pp. 485-493.
Mei-Juan Chen, Liang-Gee Chen and Tzi-Dar Chiueh, One-Dimensional Full Search Motion Estimation Algorithm For Video Coding, IEEE Transactions on Circuits and Systems for Video Technology, vol. 4, No. 5, Oct. 1994, pp. 504-509.
Mei-Juan Chen, Liang-Gee Chen, Tzi-Dar Chiueh and Yung-Pin Lee, A New Block-Matching Criterion for Motion Estimation and its Implementation, IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, No. 3, Jun. 1995, pp. 231-236.
H. Gharavi and Mike Mills, Blockmatching Motion Estimation Algorithms—New Results, IEEE Transactions on Circuits and Systems, vol. 37, No. 5, May 1990, pp. 649-651.
Daniel P. Huttenlocher, Gregory A. Klanderman and William J. Rucklidge, Comparing Images Using the Hausdorff Distance, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, No. 9, Sep. 1993, pp. 850-863.
Daniel P. Huttenlocher, Jae J. Noh and William J. Rucklidge, Tracking Non-Rigid Objects in Complex Scenes, Fourth International Conference on Computer Vision, 1993.
