Efficient search for a gray-level pattern in an image

Image analysis – Histogram processing – With a gray-level transformation

Reexamination Certificate


Details

Classification: C345S182000
Type: Reexamination Certificate
Status: active
Patent number: 06249603

ABSTRACT:

CROSS-REFERENCE TO RELATED APPLICATIONS
This Application is related to the following Application:
Efficient Search for a Gray-level Pattern In An Image Using Ranges of Sums, by William J. Rucklidge, Ser. No. 09/097,724, filed the same day as the present application.
This related Application is incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention is directed to a system for finding a gray-level pattern in an image.
2. Description of the Related Art
There has been a significant amount of work on the problem of locating the best transformation of a gray-level pattern in an image. A transformation, in a broad sense, is a movement of a pattern or image. For example, while recording a sequence of video images, an object may move, causing its position in the recorded images to change as the video sequence progresses. One transformation of interest is a translation, which is defined as movement in two dimensions (e.g., the x and y directions) without rotation. A more complicated transformation is the affine transformation, which includes translation, rotation, scaling and/or shear. Under an affine transformation, parallel lines in the pattern remain parallel after being transformed.
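To illustrate the distinction, an affine transformation of a 2-D point x can be written as x' = Ax + b, where the 2x2 matrix A carries rotation, scaling and shear, and the vector b carries translation; a pure translation is the special case A = I. A minimal sketch (the function name and shapes are our own, not from the patent):

```python
import numpy as np

def affine_transform(points, a, b):
    """Apply an affine transformation x' = A @ x + b to an array of 2-D points.

    A pure translation is the special case A = I; rotation, scaling and
    shear come from the 2x2 matrix A. Parallel lines stay parallel because
    the same linear map A is applied to every point.
    """
    points = np.asarray(points, dtype=float)   # shape (N, 2)
    a = np.asarray(a, dtype=float)             # shape (2, 2)
    b = np.asarray(b, dtype=float)             # shape (2,)
    return points @ a.T + b

# Pure translation: A is the identity, b shifts every point by (3, -1).
shifted = affine_transform([[0, 0], [1, 0], [0, 1]], np.eye(2), [3, -1])
```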
The ability to locate the best transformation of a gray-level pattern in an image forms the basis of one of the components of MPEG encoding. It is also part of computer vision systems that are used to navigate robots, find parts automatically in an inventory or manufacturing facility, register images, track objects, etc.
One method for searching for the correct transformation of a pattern in an image is to enumerate every possible transformation and compare the pattern to the image under each one. The transformation with the lowest error is the actual transformation of the pattern. Because this method tests every transformation, it is slow and consumes substantial computing resources. Previous work to improve on this method has concentrated on search methods that are less expensive but may not find the best transformation.
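For the simplest case, a pure translation, the exhaustive baseline can be sketched as follows. The sum of squared differences is an illustrative choice of error measure; the patent does not prescribe a particular error function:

```python
import numpy as np

def best_translation(image, pattern):
    """Exhaustively try every translation of `pattern` inside `image` and
    return the (row, col) offset with the lowest sum of squared differences.

    This is the slow baseline: every translation is evaluated, so the best
    one is guaranteed to be found, at the cost of exhaustive computation.
    """
    ih, iw = image.shape
    ph, pw = pattern.shape
    best_err, best_pos = None, None
    for r in range(ih - ph + 1):
        for c in range(iw - pw + 1):
            window = image[r:r + ph, c:c + pw]
            err = np.sum((window.astype(float) - pattern) ** 2)
            if best_err is None or err < best_err:
                best_err, best_pos = err, (r, c)
    return best_pos, best_err

# Plant a 2x2 pattern at offset (2, 5) and recover it.
img = np.zeros((8, 8))
img[2:4, 5:7] = 9.0
pat = 9.0 * np.ones((2, 2))
pos, err = best_translation(img, pat)
```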
For example, the computer vision community has experimented with methods that use the sum of squared differences to compare intensity image patches; however, such work has concentrated on search methods that are not guaranteed to find the best corresponding patches. Pyramid-based systems, for instance, work with multi-resolution representations of the image and the pattern: they match first at the coarsest resolution, then at the next finer resolution in a smaller region around the coarse match, then the next finer resolution, and so on. A mistake at the coarsest resolution can easily cause a large error in the final result.
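A minimal sketch of such a pyramid search, under our own simplifying assumptions (2x2 averaging for downsampling, SSD matching, a fixed +/-1 refinement window), shows both the speedup and the failure mode: only the coarsest level is searched exhaustively, so a wrong coarse match cannot be recovered at finer levels:

```python
import numpy as np

def downsample(a):
    """Halve resolution by averaging 2x2 blocks (assumes even dimensions)."""
    return 0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2])

def ssd_search(image, pattern, rows, cols):
    """Return the (row, col) offset among the candidates with the lowest SSD."""
    ph, pw = pattern.shape
    best = None
    for r in rows:
        for c in cols:
            if r < 0 or c < 0 or r + ph > image.shape[0] or c + pw > image.shape[1]:
                continue  # candidate falls outside the image
            err = np.sum((image[r:r + ph, c:c + pw] - pattern) ** 2)
            if best is None or err < best[0]:
                best = (err, r, c)
    return best[1], best[2]

def pyramid_match(image, pattern, levels=2):
    """Coarse-to-fine search: exhaustive matching at the coarsest level only,
    then refinement in a +/-1 window around the doubled coarse estimate.
    A mistake at the coarsest level cannot be recovered later."""
    imgs, pats = [image], [pattern]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        pats.append(downsample(pats[-1]))
    ih, iw = imgs[-1].shape
    ph, pw = pats[-1].shape
    r, c = ssd_search(imgs[-1], pats[-1], range(ih - ph + 1), range(iw - pw + 1))
    for lvl in range(levels - 1, -1, -1):
        r, c = 2 * r, 2 * c
        r, c = ssd_search(imgs[lvl], pats[lvl], [r - 1, r, r + 1], [c - 1, c, c + 1])
    return r, c

# Plant a 4x4 pattern at offset (4, 8) and recover it coarse-to-fine.
img = np.zeros((16, 16))
img[4:8, 8:12] = 8.0
pat = 8.0 * np.ones((4, 4))
found = pyramid_match(img, pat, levels=2)
```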
Motion compensation for video compression has also been the focus of much investigation. The emphasis has been on searching for a translation efficiently rather than evaluating every translation. Again, these methods are not guaranteed; that is, the previous work does not guarantee finding the best translation. Experimentation reveals that the previous work is not accurate enough to correctly find the pattern in a new image.
The prior art attempts to improve on the traditional methods for finding a pattern in an image by reducing computation time at the expense of accuracy. Therefore, a system is needed that can find a transformation of a gray-level pattern in an image faster than trying every transformation, but more accurately than the prior art.
SUMMARY OF THE INVENTION
The present invention, roughly described, provides for a system for determining a transformation of a gray-level pattern in an image. In one embodiment, the pattern is identified in a first image. The system then receives a second image and finds a transformation of the pattern in the second image. When finding transformations in a video sequence, the system can update the pattern based on the newly found pattern in a newly received image, thus taking into account any changes in environment (e.g. light, brightness, rotation, change in shape, etc.).
Finding a transformation includes dividing a set of transformations into groups of transformations, each group having a reference transformation, and determining whether a set of pixel intensities of the pattern lies within minimum values and maximum values. The minimum values represent minimum pixel intensities for neighborhoods of pixels in the image, and the maximum values represent maximum pixel intensities for those neighborhoods. The step of determining is performed for each reference transformation. A difference function is applied based on the step of determining whether the set of pixel intensities is within the minimum and maximum values. A reference transformation that meets a predefined set of criteria with respect to the difference function is identified. In one embodiment, the predefined criterion is that the reference transformation has the lowest difference value.
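One way to realize the min/max test described above (an illustrative sketch, not the patent's claimed implementation) is to compute, for every image pixel, the minimum and maximum intensity over a small square neighborhood, and then count how many pattern pixels fall outside that [min, max] envelope at a reference position:

```python
import numpy as np

def neighborhood_min_max(image, radius):
    """Per-pixel minimum and maximum intensity over a (2*radius+1)-wide
    square neighborhood (gray-level erosion and dilation)."""
    h, w = image.shape
    padded = np.pad(image.astype(float), radius, mode="edge")
    mins = np.full((h, w), np.inf)
    maxs = np.full((h, w), -np.inf)
    for dr in range(2 * radius + 1):
        for dc in range(2 * radius + 1):
            shifted = padded[dr:dr + h, dc:dc + w]
            mins = np.minimum(mins, shifted)
            maxs = np.maximum(maxs, shifted)
    return mins, maxs

def out_of_range_count(image, pattern, offset, radius):
    """Difference value at a reference translation: the number of pattern
    pixels whose intensity falls outside the [min, max] envelope of the
    corresponding image neighborhood."""
    mins, maxs = neighborhood_min_max(image, radius)
    r, c = offset
    ph, pw = pattern.shape
    lo = mins[r:r + ph, c:c + pw]
    hi = maxs[r:r + ph, c:c + pw]
    return int(np.sum((pattern < lo) | (pattern > hi)))

# A bright pixel at (3, 3): the pattern survives the test there but not at (0, 0).
img = np.zeros((8, 8))
img[3, 3] = 5.0
pat = np.array([[5.0, 0.0], [0.0, 0.0]])
hit = out_of_range_count(img, pat, (3, 3), radius=1)
miss = out_of_range_count(img, pat, (0, 0), radius=1)
```

A count of zero means every pattern pixel could still match somewhere within its neighborhood, so the corresponding group of transformations cannot yet be ruled out.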
In one embodiment, the system repeatedly divides a transformation space for the image into smaller groups until a minimum group size is reached. Each group belongs to a resolution level. The step of dividing includes choosing at least one group for dividing in at least a subset of the levels. The step of choosing also includes determining a difference value for a subset of groups based on whether a set of pixel intensities for the pattern lies within minimum values and maximum values. The system then backtracks through the levels, removing from consideration groups having a difference value worse than the best known difference value and performing the step of dividing on any group having a difference value better than the best known value. Upon completion of the backtracking, the best known difference value corresponds to the best known transformation of the gray-level pattern in the image.
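The divide-prune loop described above can be sketched as a branch-and-bound search over translations. This is our own simplified rendering: the difference value is the number of mismatched pixels, and the lower bound for a group comes from the [min, max] envelope of a window wide enough to cover every translation in the group, so pattern pixels outside the envelope cannot match anywhere in the group:

```python
import numpy as np

def box_min_max(image, radius):
    """Per-pixel min and max over a (2*radius+1)-wide square window."""
    h, w = image.shape
    p = np.pad(image.astype(float), radius, mode="edge")
    mins = np.full((h, w), np.inf)
    maxs = np.full((h, w), -np.inf)
    for dr in range(2 * radius + 1):
        for dc in range(2 * radius + 1):
            s = p[dr:dr + h, dc:dc + w]
            mins = np.minimum(mins, s)
            maxs = np.maximum(maxs, s)
    return mins, maxs

def branch_and_bound(image, pattern):
    """Find the translation minimizing the number of mismatched pixels.

    Each group of candidate translations is scored at its reference (corner)
    translation; pattern pixels outside the group-wide [min, max] envelope
    cannot match any translation in the group, so the count is a valid lower
    bound. Groups whose bound is no better than the best exact value found so
    far are pruned; the rest are split down to single translations, which are
    evaluated exactly.
    """
    ih, iw = image.shape
    ph, pw = pattern.shape
    rmax, cmax = ih - ph, iw - pw

    def exact(r, c):
        return int(np.sum(image[r:r + ph, c:c + pw] != pattern))

    best = [exact(0, 0), (0, 0)]
    cache = {}  # min/max envelopes keyed by window radius

    def bound(rlo, rhi, clo, chi):
        radius = max(rhi - rlo, chi - clo)
        if radius not in cache:
            cache[radius] = box_min_max(image, radius)
        mins, maxs = cache[radius]
        lo = mins[rlo:rlo + ph, clo:clo + pw]
        hi = maxs[rlo:rlo + ph, clo:clo + pw]
        return int(np.sum((pattern < lo) | (pattern > hi)))

    def split(lo_, hi_):
        mid = (lo_ + hi_) // 2
        return [(lo_, mid), (mid + 1, hi_)] if lo_ < hi_ else [(lo_, hi_)]

    stack = [(0, rmax, 0, cmax)]
    while stack:
        rlo, rhi, clo, chi = stack.pop()
        if rlo == rhi and clo == chi:        # minimum group size: evaluate exactly
            val = exact(rlo, clo)
            if val < best[0]:
                best[0], best[1] = val, (rlo, clo)
            continue
        if bound(rlo, rhi, clo, chi) >= best[0]:
            continue                          # prune: group cannot beat best known
        for ra, rb in split(rlo, rhi):
            for ca, cb in split(clo, chi):
                stack.append((ra, rb, ca, cb))
    return best[1], best[0]

# Plant a 2x2 pattern at offset (3, 2) and recover it with pruning.
img = np.zeros((8, 8))
img[3:5, 2:4] = 7.0
pat = 7.0 * np.ones((2, 2))
pos, val = branch_and_bound(img, pat)
```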
The present invention can be implemented using hardware, software or a combination of hardware and software. If the invention is implemented with hardware, a general purpose computer, a specifically designed computer or a specifically designed circuit can be used. If the invention is implemented with software, the software can be stored on any appropriate storage medium including RAM, ROM, CD-ROM, floppy disks, hard disks, non-volatile memory and other memory devices.
These and other objects and advantages of the invention will appear more clearly from the following detailed description in which the preferred embodiment of the invention has been set forth in conjunction with the drawings.


REFERENCES:
patent: 5410617 (1995-04-01), Kidd et al.
patent: 5694224 (1997-12-01), Tai
patent: 5745173 (1998-04-01), Edwards et al.
patent: 6026179 (2000-02-01), Brett
W. Rucklidge, Efficient Visual Recognition Using the Hausdorff Distance, Lecture Notes in Computer Science, 1996.
P. Anandan, A Computational Framework and an Algorithm for the Measurement of Visual Motion, International Journal of Computer Vision, vol. 2, 1989, pp. 283-310.
Michael J. Black, Combining Intensity and Motion for Incremental Segmentation and Tracking Over Long Image Sequences, Computer Vision - ECCV '92, Second European Conference on Computer Vision, May 19-22, 1992, pp. 485-493.
Mei-Juan Chen, Liang-Gee Chen and Tzi-Dar Chiueh, One-Dimensional Full Search Motion Estimation Algorithm For Video Coding, IEEE Transactions on Circuits and Systems for Video Technology, vol. 4, no. 5, Oct. 1994, pp. 504-509.
Mei-Juan Chen, Liang-Gee Chen, Tzi-Dar Chiueh and Yung-Pin Lee, A New Block-Matching Criterion for Motion Estimation and its Implementation, IEEE Transactions on Circuits and Systems for Video Technology, vol. 5, no. 3, Jun. 1995, pp. 231-236.
H. Gharavi and Mike Mills, Blockmatching Motion Estimation Algorithms - New Results, IEEE Transactions on Circuits and Systems, vol. 37, no. 5, May 1990, pp. 649-651.
Daniel P. Huttenlocher, Gregory A. Klanderman and William J. Rucklidge, Comparing Images Using the Hausdorff Distance, IEEE Transactions on Pattern Analysis and Machine Intelligence.
