Method and apparatus for identifying scale invariant...

Image analysis – Pattern recognition – Template matching

Reexamination Certificate


Details

U.S. Classification: C382S220000
Type: Reexamination Certificate
Status: active
Patent Number: 06711293

ABSTRACT:

FIELD OF THE INVENTION
This invention relates to object recognition and more particularly to identifying scale invariant features in an image and use of same for locating an object in an image.
BACKGROUND OF THE INVENTION
With the advent of robotics and industrial automation, there has been an increasing need to incorporate computer vision systems into industrial systems. Current computer vision techniques generally involve producing a plurality of reference images which act as templates and comparing the reference images against an image under consideration to determine whether the image under consideration matches one of the reference images. Comparisons are thus performed on a full-image basis. Existing systems, however, are generally accurate in only two dimensions and generally require that a camera acquiring an image of an object be above the object, or in a predetermined orientation, to view the object in two dimensions. Similarly, the image under consideration must be taken from the same angle. These constraints restrict how computer vision systems can be implemented, rendering such systems difficult to use in certain applications. What would be desirable, therefore, is a computer vision system operable to determine the presence or absence of an object in an image taken from virtually any direction and under varying lighting conditions.
SUMMARY OF THE INVENTION
The present invention addresses the above need by providing a method and apparatus for identifying scale invariant features in an image and a further method and apparatus for using such scale invariant features to locate an object in an image. In particular, the method and apparatus for identifying scale invariant features may involve a processor circuit for producing a plurality of component subregion descriptors for each subregion of a pixel region about pixel amplitude extrema in a plurality of difference images produced from the image. This may involve producing a plurality of difference images by blurring an initial image to produce a blurred image and by subtracting the blurred image from the initial image to produce the difference image. Successive blurring and subtracting may be used to produce successive difference images, where the initial image used in a successive blurring function includes a blurred image produced in a predecessor blurring function.
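The successive blur-and-subtract scheme described above can be sketched as follows (a minimal Python/NumPy sketch; the separable Gaussian blur, the sigma value, and the number of levels are illustrative assumptions, not parameters fixed by the specification):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalised to sum to 1.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(image, sigma):
    # Separable Gaussian blur: convolve each row, then each column.
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def difference_images(image, sigma=1.6, levels=4):
    # Successive blur-and-subtract: each round of blurring starts from
    # the blurred image produced by the predecessor round, as the
    # summary describes.
    diffs, current = [], image.astype(float)
    for _ in range(levels):
        blurred = blur(current, sigma)
        diffs.append(current - blurred)
        current = blurred
    return diffs
```

Because every round of blurring starts from the previous blurred image, each successive difference image captures detail at a progressively coarser scale.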
Having produced difference images, the method and apparatus may further involve locating pixel amplitude extrema in the difference images. This may be done by a processor circuit which compares the amplitude of each pixel in an image under consideration with the amplitudes of pixels in an area about that pixel, to identify local maximal and minimal amplitude pixels. The area about the pixel under consideration may involve an area of pixels in the same image and an area of pixels in at least one adjacent image, such as a predecessor image, a successor image, or both.
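One concrete reading of this extremum search compares each interior pixel against its 26 neighbours: 8 in its own difference image and 9 in each adjacent image. The following sketch assumes equal-sized difference images stacked into a single array; the uniqueness test and brute-force loops are simplifications for clarity:

```python
import numpy as np

def local_extrema(diffs):
    # diffs: list of equal-sized difference images.
    # A pixel is an extremum when it is strictly the largest (or smallest)
    # value in the 3x3x3 cube spanning its own image and the predecessor
    # and successor images.
    stack = np.stack(diffs)              # shape (levels, H, W)
    L, H, W = stack.shape
    extrema = []
    for l in range(1, L - 1):            # interior images only
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = stack[l, y, x]
                cube = stack[l - 1:l + 2, y - 1:y + 2, x - 1:x + 2]
                if v == cube.max() and (cube == v).sum() == 1:
                    extrema.append((l, y, x, "max"))
                elif v == cube.min() and (cube == v).sum() == 1:
                    extrema.append((l, y, x, "min"))
    return extrema
```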
The method and apparatus may further involve use of a processor circuit to produce a pixel gradient vector for each pixel in each difference image and using the pixel gradient vectors of pixels near an extremum to produce an image change tendency vector having an orientation, the orientation being associated with respective maximal and minimal amplitude pixels in each difference image.
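The gradient-vector and orientation-assignment step might be sketched as below; the finite-difference gradient, the neighbourhood radius, and the 36-bin histogram are assumptions for illustration rather than details taken from the text:

```python
import numpy as np

def pixel_gradients(diff):
    # Per-pixel gradient from central finite differences; borders stay zero.
    dy = np.zeros_like(diff)
    dx = np.zeros_like(diff)
    dy[1:-1, :] = diff[2:, :] - diff[:-2, :]
    dx[:, 1:-1] = diff[:, 2:] - diff[:, :-2]
    mag = np.hypot(dx, dy)               # gradient magnitude
    ori = np.arctan2(dy, dx)             # orientation in radians, (-pi, pi]
    return mag, ori

def dominant_orientation(mag, ori, y, x, radius=4, bins=36):
    # Accumulate the gradient magnitudes of pixels near the extremum into
    # an orientation histogram; the peak bin gives the orientation of the
    # image change tendency vector associated with the extremum.
    m = mag[y - radius:y + radius + 1, x - radius:x + radius + 1]
    o = ori[y - radius:y + radius + 1, x - radius:x + radius + 1]
    hist, edges = np.histogram(o, bins=bins, range=(-np.pi, np.pi), weights=m)
    peak = hist.argmax()
    return 0.5 * (edges[peak] + edges[peak + 1])   # bin centre, radians
```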
The plurality of component subregion descriptors may be produced by the processor circuit by defining regions about corresponding maximal and minimal amplitude pixels in each difference image and defining subregions in each of such regions.
By using the pixel gradient vectors of pixels within each subregion, the magnitudes of vectors at orientations within predefined ranges of orientations can be accumulated for each subregion. These numbers represent subregion descriptors, describing scale invariant features of the reference image. By taking images of objects from different angles and under different lighting conditions, and using the above process, a library of scale invariant features of reference objects can be produced.
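Accumulating gradient magnitudes into per-subregion orientation ranges could look like this sketch. The 16x16 region, 4x4 grid of subregions, and 8 orientation ranges are illustrative choices, not figures taken from the text, and the final normalisation is one way to gain some invariance to lighting changes:

```python
import numpy as np

def subregion_descriptor(mag, ori, y, x, region=16, grid=4, bins=8):
    # Define a region about the extremum at (y, x), divide it into
    # grid x grid subregions, and for each subregion accumulate the
    # gradient magnitudes whose orientations fall within each of `bins`
    # predefined orientation ranges.
    half = region // 2
    m = mag[y - half:y + half, x - half:x + half]
    o = ori[y - half:y + half, x - half:x + half]
    step = region // grid
    desc = []
    for gy in range(grid):
        for gx in range(grid):
            sm = m[gy * step:(gy + 1) * step, gx * step:(gx + 1) * step]
            so = o[gy * step:(gy + 1) * step, gx * step:(gx + 1) * step]
            hist, _ = np.histogram(so, bins=bins,
                                   range=(-np.pi, np.pi), weights=sm)
            desc.extend(hist)
    v = np.array(desc)                    # grid*grid*bins numbers
    n = np.linalg.norm(v)
    return v / n if n > 0 else v          # normalise the descriptor
```

With these assumed sizes each extremum yields 4 x 4 x 8 = 128 numbers, the component subregion descriptors for that feature.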
In accordance with another aspect of the invention, there is provided a method and apparatus for locating an object in an image. A processor circuit is used to subject an image under consideration to the same process described above for the reference image, to produce a plurality of scale invariant features or subregion descriptors associated with the image under consideration. Then, scale invariant features of the image under consideration are correlated with scale invariant features of reference images depicting known objects, and detection of an object is indicated when a sufficient number of scale invariant features of the image under consideration define an aggregate correlation exceeding a threshold correlation with scale invariant features associated with the object.
Consequently, in effect, correlating involves the use of a processor circuit to determine correlations between component subregion descriptors for a plurality of subregions of pixels about pixel amplitude extrema in a plurality of difference images produced from the image, and reference component descriptors for a plurality of subregions of pixels about pixel amplitude extrema in a plurality of difference images produced from an image of at least one reference object in a reference image.
Correlating may be performed by the processor circuit by applying the component subregion descriptors and the reference component descriptors to a Hough transform. The Hough transform may produce a list of component subregion descriptors of likely objects within the image under consideration and a list of matching reference component descriptors from the library of scale invariant features. These lists may be applied to a least squares fit algorithm, which attempts to identify a plurality of best fitting reference component descriptors identifying one of the likely objects. Having found the best fitting reference component descriptors, the image from which they were produced may be readily identified, and consequently the identification, scale and orientation of the object associated with such reference component descriptors may be determined, precisely identifying the object, its orientation, its scale and its location in the image under consideration.
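The specification matches features with a Hough transform followed by a least squares fit; the toy sketch below deliberately replaces that machinery with a simple correlation-and-voting scheme, purely to illustrate the aggregate-threshold idea of detection. All names and threshold values here are hypothetical:

```python
import numpy as np

def detect_objects(image_descs, library, match_thresh=0.8, min_matches=3):
    # library maps an object name to a list of unit-normalised reference
    # descriptors. A feature "matches" an object when its best correlation
    # (dot product) against that object's references reaches match_thresh;
    # the object is reported when enough features match in aggregate.
    votes = {}
    for d in image_descs:
        for name, refs in library.items():
            best = max(float(np.dot(d, r)) for r in refs)
            if best >= match_thresh:
                votes[name] = votes.get(name, 0) + 1
    return [name for name, count in votes.items() if count >= min_matches]
```

In the patent's scheme the Hough transform additionally lets matched features vote for a consistent pose (scale, orientation, location), which the least squares fit then refines; this sketch only counts matches.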


REFERENCES:
patent: 3069654 (1962-12-01), Hough
patent: 4907156 (1990-03-01), Doi et al.
patent: 5119444 (1992-06-01), Nishihara
patent: 5436979 (1995-07-01), Gray et al.
patent: 5598481 (1997-01-01), Nishikawa et al.
patent: 5617459 (1997-04-01), Makram-Ebeid et al.
patent: 5666441 (1997-09-01), Rao et al.
patent: 5764802 (1998-06-01), Simon
Ballard, D.H., “Generalizing the Hough transform to detect arbitrary shapes”, Pattern Recognition, 13, 2 (1981), pp. 111-122.
Crowley, James L., and Alice C. Parker, “A representation for shape based on peaks and ridges in the difference of low-pass transform”, IEEE Trans. on Pattern Analysis and Machine Intelligence, 6, 2 (1984), pp. 156-170.
Schmid, C., and R. Mohr, “Local grayvalue invariants for image retrieval”, IEEE PAMI, 19, 5 (1997), pp. 530-535.


