Image recognizing apparatus

Image analysis – Pattern recognition – Feature extraction


Details

U.S. classification: C382S112000, C382S151000, C382S181000, C382S218000, C382S286000, C382S289000, C382S291000, C382S294000, C382S295000, C358S405000, C358S406000, C358S504000

Type: Reexamination Certificate

Status: active

Patent number: 06333997

ABSTRACT:

BACKGROUND OF THE INVENTION
This invention relates to an image recognizing apparatus for recognizing images on printed matter as it is conveyed, for use in a print inspecting apparatus that inspects the printed state of the printed matter.
In an image recognizing apparatus that picks up, as image data, images on conveyed to-be-recognized objects (e.g. printed matter) and compares them with a reference image to identify them, the reference image must be accurately aligned with the image of each to-be-recognized object so that the degree of correspondence between them can be estimated. In practice, however, a to-be-recognized object skews or slips while it is conveyed, and the conveyance state differs from one object to the next.
In light of the above, it is necessary to accurately detect to-be-recognized objects one by one. For the recognition section incorporated in a conventional printing apparatus, a detection method has been proposed in which detection and positioning are performed with reference to a mark (a "+" called a register mark, or a line) printed on the medium together with the to-be-recognized object. Further, where the format of the to-be-recognized object is known, as in the case of a document, a journal or a driver's license, a horizontal rule within the object is sometimes used as a positioning reference.
In addition, Japanese Patent Application KOKAI Publication No. 6-318246 discloses a method for the case where there is no reference within the to-be-recognized object. Specifically, it discloses a method for correcting and positioning an image on the basis of information output from a sensor that detects the conveyance state of the object.
As described above, three approaches have been employed: printing a positioning reference mark together with the to-be-recognized object, positioning with a horizontal rule within the object, and using a sensor that senses the conveyance state. These methods, however, have the disadvantages described below.
In the method of printing a positioning reference mark together with the to-be-recognized object, the reference-mark pattern must be printed in advance outside the object, and a cutting mechanism must be provided to remove the reference-mark pattern after printing.
The method of positioning with a horizontal rule uses a pattern within the object itself and is therefore free from the above problem. However, it cannot be applied to a to-be-recognized object that contains no such reference rule.
Finally, the method that uses a conveyance-state sensor requires such a sensor in addition to the sensor that picks up the image, which enlarges the apparatus.
BRIEF SUMMARY OF THE INVENTION
It is an object of the invention to provide an image recognizing apparatus capable of accurately detecting and recognizing to-be-recognized objects even if the position of the input image varies from one object to the next.
A line sensor reads a to-be-recognized object as it is conveyed, and image data representing the object is stored in an image memory. A parameter input section inputs, as parameters, the luminance difference between the object and its background, the conveyance conditions, and so on. An endpoint detecting section detects the border between the object image stored in the image memory and the background, on the basis of pairs of endpoints located on vertical and horizontal lines passing through the object. Using the parameters, the endpoint detecting section detects a plurality of endpoint pairs without being influenced by external noise or a stain on the object.
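As a rough illustration of this step, the sketch below scans one row or column of a grayscale image (assumed to be a NumPy array) and reports the first and last positions where the luminance departs from the background by more than the given parameter; the run-length check stands in for the noise and stain immunity the summary mentions. The function names and the min_run parameter are illustrative and are not taken from the patent.

```python
import numpy as np

def find_endpoint_pair(line, background, luminance_diff, min_run=3):
    """Return (first, last) indices where the scan line departs from the
    background luminance by more than luminance_diff for at least min_run
    consecutive pixels; isolated noise pixels or tiny stains are ignored."""
    mask = np.abs(line.astype(int) - background) > luminance_diff
    # runs[i] is True when mask[i:i+min_run] is an unbroken "object" run
    runs = np.convolve(mask.astype(int), np.ones(min_run, dtype=int), mode="valid") == min_run
    idx = np.flatnonzero(runs)
    if idx.size == 0:
        return None                      # this scan line missed the object
    return int(idx[0]), int(idx[-1] + min_run - 1)

def detect_endpoint_pairs(image, background, luminance_diff, rows, cols):
    """Collect endpoint pairs on selected horizontal and vertical scan lines."""
    horizontal = {r: find_endpoint_pair(image[r, :], background, luminance_diff) for r in rows}
    vertical = {c: find_endpoint_pair(image[:, c], background, luminance_diff) for c in cols}
    return horizontal, vertical
```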
A length detecting section detects the vertical and horizontal lengths of the to-be-recognized object, and a position determining section determines the four corners and the center of the entire object from the detected lengths. A recognizing section compares the image input by the image input section with reference image data stored in a reference image data memory, thereby recognizing the image on the object.
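Continuing the sketch above, the lengths and the corner and center coordinates could be derived from the collected endpoint pairs roughly as follows. This assumes a nearly axis-aligned object and uses medians for robustness, which is one plausible reading of the summary rather than the patent's exact procedure.

```python
import numpy as np

def object_geometry(horizontal, vertical):
    """Estimate the object's width and height from the endpoint pairs,
    then its four corners and center.  Medians keep the estimate robust
    against scan lines that hit a stain or miss the object entirely."""
    h_pairs = [p for p in horizontal.values() if p is not None]
    v_pairs = [p for p in vertical.values() if p is not None]
    width = float(np.median([right - left for left, right in h_pairs]))
    height = float(np.median([bottom - top for top, bottom in v_pairs]))
    left = float(np.median([l for l, _ in h_pairs]))
    top = float(np.median([t for t, _ in v_pairs]))
    corners = [(left, top), (left + width, top),
               (left, top + height), (left + width, top + height)]
    center = (left + width / 2.0, top + height / 2.0)
    return width, height, corners, center
```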
The position determining section includes an extracting section that extracts a partial area of the to-be-recognized object on the basis of the corner coordinates judged by a corner judging section. The recognizing section includes a section that compares the partial area extracted by the extracting section with the corresponding area of the reference image, thereby recognizing the image input by the image input section.
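A minimal sketch of partial-area extraction and comparison might look like this; the fractional sub-rectangle and the mean-absolute-difference score are illustrative choices, since the patent does not fix a particular region or similarity metric here.

```python
import numpy as np

def extract_partial_area(image, corners, frac=(0.25, 0.25, 0.5, 0.5)):
    """Cut a sub-rectangle of the detected object, expressed as fractions
    (x, y, w, h) of its bounding box, so the same region can be taken
    from the reference image for comparison."""
    (x0, y0) = corners[0]          # top-left corner
    (x1, y1) = corners[3]          # bottom-right corner
    w, h = x1 - x0, y1 - y0
    fx, fy, fw, fh = frac
    r0, r1 = int(y0 + fy * h), int(y0 + (fy + fh) * h)
    c0, c1 = int(x0 + fx * w), int(x0 + (fx + fw) * w)
    return image[r0:r1, c0:c1]

def similarity(area, reference_area):
    """Mean absolute difference as a simple degree-of-correspondence score
    (0 means identical); smaller is a better match."""
    return float(np.mean(np.abs(area.astype(float) - reference_area.astype(float))))
```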
Furthermore, the recognizing section includes a determining section that compares the lengths supplied from the length detecting section with the distance between each pair of endpoints supplied from the endpoint detecting section, extracts the effective endpoint-pair information, and determines, from the number of effective endpoint pairs, whether or not the position of the entire to-be-recognized object can be detected; an accumulating section that accumulates the endpoint-pair information of the object when the determining section determines that the position cannot be detected; a display section that displays the determination result when it indicates that the position of the entire object cannot be detected; and an abnormal portion estimating section that estimates an abnormal portion of the image recognizing apparatus on the basis of the endpoint information accumulated by the accumulating section.
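One way to realize the effective-pair filtering and the detectability decision described above is sketched below; the tolerance and minimum-pair-count thresholds, and the simple failure log, are assumptions made for illustration.

```python
def effective_pairs(pairs, expected_length, tolerance):
    """Keep only endpoint pairs whose span agrees with the detected length
    to within `tolerance` pixels; pairs distorted by noise or a stain on
    the object are discarded."""
    return {k: p for k, p in pairs.items()
            if p is not None and abs((p[1] - p[0]) - expected_length) <= tolerance}

def position_detectable(h_pairs, v_pairs, width, height, tolerance=5, min_pairs=3):
    """Decide, from the number of effective pairs in each direction,
    whether the position of the whole object can be determined."""
    eff_h = effective_pairs(h_pairs, width, tolerance)
    eff_v = effective_pairs(v_pairs, height, tolerance)
    ok = len(eff_h) >= min_pairs and len(eff_v) >= min_pairs
    return ok, eff_h, eff_v

# Failed detections are accumulated so that, for example, a side that is
# persistently missing can point to a dirty sensor element or a conveyance fault.
failure_log = []

def record_failure(h_pairs, v_pairs):
    failure_log.append({"horizontal": h_pairs, "vertical": v_pairs})
```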
When detection of the to-be-recognized object fails repeatedly, the apparatus warns the user and estimates which portion of itself is abnormal. An abnormal portion, if any, can thus be detected at an early stage.
In addition, the conveyance state of each to-be-recognized object is judged and the parameters are changed on the basis of the judgment result, so that the object can be detected more accurately.
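As an illustration of such feedback, the sketch below estimates the skew of the object's left edge from the drift of the left endpoints across scan rows and, when the skew is large, relaxes the span tolerance used when filtering endpoint pairs. The one-degree threshold and the specific adjustment are assumptions, not taken from the patent.

```python
import math
import numpy as np

def judge_conveyance(horizontal_pairs, params):
    """Estimate skew from how the left endpoints drift with the row index,
    and loosen the span tolerance when the object is noticeably skewed,
    since a skewed object yields slightly varying spans per scan line."""
    rows = sorted(r for r, p in horizontal_pairs.items() if p is not None)
    if len(rows) < 2:
        return 0.0, params
    lefts = [horizontal_pairs[r][0] for r in rows]
    slope = np.polyfit(rows, lefts, 1)[0]        # least-squares slope of the left edge
    skew_deg = math.degrees(math.atan(slope))
    if abs(skew_deg) > 1.0:                      # illustrative threshold
        params = dict(params, tolerance=params.get("tolerance", 5) * 2)
    return skew_deg, params
```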
Additional objects and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.


REFERENCES:
patent: 5216724 (1993-06-01), Suzuki et al.
patent: 5253306 (1993-10-01), Nishio
patent: 5371810 (1994-12-01), Vaidianathan
patent: 5377279 (1994-12-01), Hanafusa et al.
patent: 5680471 (1997-10-01), Kanebako et al.
patent: 5892854 (1999-04-01), de Queiroz et al.
patent: 5956414 (1999-09-01), Grueninger
patent: 5999646 (1999-12-01), Tamagaki
patent: 0 342 060 B1 (1994-08-01), None
patent: 0 704 821 A2 (1996-04-01), None
patent: 6-318246 (1994-11-01), None
