Systems and methods for adjusting lighting of a part based...
Classification: Television – Special applications – Flaw detector
Patent Type: Reexamination Certificate
Filed: 2000-01-18
Issued: 2003-04-01
Examiner: Le, Vu (Department: 2613)
Status: active
Patent Number: 06542180
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates to lighting systems for vision systems.
2. Description of Related Art
The light output of any device is a function of many variables, including the instantaneous drive current, the age of the device, the ambient temperature, any dirt or residue on the light source, the performance history of the device, and the like. Machine vision instrument systems typically locate objects within their field of view using methods which may determine, among other things, the contrast within the region of interest where the objects may be found. This determination is significantly affected by the amount of incident or transmitted light.
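As a purely editorial illustration, not part of the patent: one common way such a system might quantify contrast within a region of interest is the standard deviation of the gray levels there; for a linear sensor that value scales with the incident light, so a fixed contrast criterion can pass or fail on lighting alone. The function name and ROI convention below are assumptions.

    # Illustrative only: contrast within an ROI measured as the gray-level
    # standard deviation; the ROI geometry and metric are assumptions.
    import numpy as np

    def roi_contrast(image, x, y, w, h):
        # Axis-aligned rectangular ROI with top-left corner (x, y).
        roi = np.asarray(image, dtype=float)[y:y + h, x:x + w]
        return roi.std()

    img = np.random.default_rng(0).uniform(80, 120, size=(480, 640))
    print(roi_contrast(img, 100, 100, 50, 50))        # contrast at nominal light
    print(roi_contrast(2.0 * img, 100, 100, 50, 50))  # doubles when the light doubles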
Automated video inspection metrology instruments generally have a programming capability that allows an event sequence to be defined by the user. This can be implemented either deliberately, for example through programming, or through a recording mode that progressively learns the instrument sequence. The sequence commands are stored as a part program. The ability to create programs with instructions that perform a sequence of instrument events provides several benefits.
For example, more than one workpiece or instrument sequence can be performed with an assumed level of instrument repeatability. In addition, a plurality of instruments can execute a single program, so that a plurality of inspection operations can be performed simultaneously or at a later time. Additionally, the programming capability provides the ability to archive the operation results. Thus, the testing process can be analyzed and potential trouble spots in the workpiece or breakdowns in the controller can be identified. Without adequate standardization and repeatability, however, archived programs vary in performance over time and across different instruments of the same model and equipment.
Conventionally, as illustrated in U.S. Pat. No. 5,753,903 to Mahaney, closed-loop control systems are used to ensure that the output light intensity of a light source of a machine vision system is driven to a particular command level. Thus, these conventional closed-loop control systems prevent the output light intensity from drifting from the desired output light intensity due to variations in the instantaneous drive current, the age of the light source, the ambient temperature, or the like.
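The '903 patent is only summarized above; the sketch below is a generic, assumption-laden illustration of closed-loop intensity control, using a made-up lamp model in which an "efficiency" factor stands in for aging, temperature and dirt.

    # Generic closed-loop sketch (assumed model, not taken from the '903 patent):
    # the drive level is repeatedly corrected until the measured output
    # matches the commanded intensity, regardless of lamp efficiency.
    def closed_loop_output(command, efficiency, steps=100, gain=0.5):
        drive = 0.0
        for _ in range(steps):
            measured = efficiency * drive        # sensor reading of the light output
            drive += gain * (command - measured) # correct the drive toward the command
        return efficiency * drive

    print(closed_loop_output(100.0, efficiency=0.8))  # ~100 despite the weaker lamp
    print(closed_loop_output(100.0, efficiency=0.6))  # still ~100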
SUMMARY OF THE INVENTION
This invention is especially useful for producing reliable and repeatable results when using predetermined commands to the illumination system, such as when the command is included in a part-program that will be used on a different vision system, and/or on the same or a different vision system at a different time or place, or to view parts having variable surface characteristics.
However, in these conventional closed-loop control systems, the output light intensity of the light source is driven to the particular command level regardless of the quality of the illumination of the part that results. Thus, even if the output light intensity of the light source is controlled to a particular command level, the resulting image captured by the machine vision system may not have a desired image quality or characteristic.
Similarly, U.S. patent application Ser. No. 09/425,990, filed Dec. 30, 1999, discloses a system for calibrating the output intensity of a light source so that the actual output light intensity of the light source in operation corresponds to the desired output light intensity based on the particular input light intensity command value. Again, even though the output light intensity may be at the desired value, the image quality or characteristic of the resulting captured image may be insufficient.
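The incorporated application is likewise only characterized above. As an assumed illustration, output-intensity calibration of this general kind can be thought of as inverting a measured command-to-output curve; the numbers and function name below are invented for the example.

    # Assumed illustration of output-intensity calibration (not the actual
    # method of the incorporated application): invert a measured response
    # curve so a desired output maps to the command that actually produces it.
    import numpy as np

    command_values  = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # input command
    measured_output = np.array([0.0, 18.0, 42.0, 70.0, 100.0])  # calibrated response

    def calibrated_command(desired_output):
        return float(np.interp(desired_output, measured_output, command_values))

    print(calibrated_command(50.0))  # >50, because this lamp under-delivers mid-range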
Additionally, input light settings in many vision systems often do not correspond to fixed output light intensities. Moreover, the output light intensity cannot be measured directly by the user. Rather, the output light intensity is measured indirectly by measuring the brightness of the image. In general, the brightness of the image is the average gray level of the image. Alternatively, the output light intensity may be measured directly using specialized instruments external to a particular vision system.
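For illustration only (assumed code, not from the patent), the indirect measurement described above amounts to averaging the pixel gray levels of the captured image:

    # Illustrative only: brightness measured indirectly as the average gray
    # level of the captured image.
    import numpy as np

    def image_brightness(image):
        return float(np.asarray(image, dtype=float).mean())

    frame = np.random.default_rng(1).integers(0, 256, size=(480, 640))
    print(image_brightness(frame))   # average gray level, roughly 127 here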
In any case, the lighting behavior, i.e., the relationship between the measured output light intensity and the commanded light intensity, is not consistent between vision systems, within a single vision system over time, or between parts being viewed. Rather, the relationship between the measured output light intensity and the commanded light intensity depends on the optic elements of the vision system, the particular light source being used to illuminate a part, the particular bulb of that light source, the particular part being viewed, and the like.
This inconsistency of the lighting behavior makes it difficult to correctly run part programs even within a single run of parts on a single machine. That is, a part program with a fixed set of commanded light intensity values might produce images of varying brightness when illuminating different pieces of a single part. However, many measurement algorithms, such as algorithms using edge detection, depend on the brightness of the image. As a result, because the brightness of the resulting images generated when viewing different parts often differs, part programs do not run consistently.
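As an assumed illustration of that dependence, not the patent's algorithm: an edge detector with a fixed gradient threshold can find an edge in a brightly lit image yet miss the same edge when the part returns less light.

    # Assumed illustration: a fixed-threshold edge detector fails when the
    # same edge is imaged at a lower brightness.
    import numpy as np

    def edge_positions(profile, threshold=40.0):
        grad = np.abs(np.diff(np.asarray(profile, dtype=float)))
        return np.nonzero(grad > threshold)[0]

    bright = np.array([200] * 10 + [100] * 10)   # gray-level step of 100
    dim    = (0.3 * bright).astype(int)          # same part, less light: step of 30

    print(edge_positions(bright))  # edge detected at index 9
    print(edge_positions(dim))     # no edge detected with the fixed threshold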
This invention provides systems, methods and graphical user interfaces that adjust the light intensity of a vision system to obtain a desired image quality or characteristic of a captured image.
This invention separately provides systems, methods and graphical user interfaces that define a plurality of regions of interest within a captured image to be used to determine the image quality or characteristic of the captured image resulting from a current illumination level.
This invention separately provides systems, methods and graphical user interfaces that allow a user to easily and quickly define multiple regions of interest that can be used to determine the image quality of a captured image.
In various exemplary embodiments of the systems, methods and graphical user interfaces of this invention, a user can invoke a multi area image quality tool to define two or more regions of interest within a captured image of a part. This multi area image quality tool is invoked during generation of a part program useable to illuminate the part and to capture images of the part for subsequent analysis. The multi area image quality tool allows the user to specify the location, orientation, size and separation of the two or more regions of interest. The multi area image quality tool also allows the user to specify one or more of the light sources to be used to illuminate the part and possibly the angle from which the illumination is provided, the operational mode of the light sources of the vision system, the image quality to be measured within the two or more regions of interest, and how the image portions within the regions of interest are to be analyzed, including a goal to be reached for the selected image quality and/or a range of acceptable values for the selected image quality.
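The patent text above does not specify a data format; purely as a hypothetical sketch, an instance of the multi area image quality tool recorded in a part program might carry information along these lines (all type and field names are assumptions):

    # Hypothetical sketch only; field names and structure are assumptions,
    # not the patent's actual part-program representation.
    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class RegionOfInterest:
        center: Tuple[float, float]          # location in image coordinates
        size: Tuple[float, float]            # width and height
        angle_deg: float = 0.0               # orientation

    @dataclass
    class MultiAreaImageQualityTool:
        regions: List[RegionOfInterest]      # two or more regions of interest
        light_sources: List[str]             # which sources illuminate the part
        illumination_angle_deg: Optional[float] = None
        lighting_mode: str = "continuous"    # operational mode of the sources
        quality_metric: str = "contrast"     # image quality measured in each region
        goal: Optional[float] = None         # target value for the metric
        acceptable_range: Optional[Tuple[float, float]] = None

    tool = MultiAreaImageQualityTool(
        regions=[RegionOfInterest((120.0, 80.0), (40.0, 20.0)),
                 RegionOfInterest((320.0, 240.0), (40.0, 20.0), angle_deg=90.0)],
        light_sources=["ring"],
        acceptable_range=(30.0, 60.0),
    )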
In operation of the part program which includes an instance of the multi area image quality tool, the part is illuminated based on the illumination commands contained within the part program immediately preceding that instance of the multi area image quality tool. In particular, either the closed-loop control systems disclosed in, for example, the '903 patent, or the calibration systems and methods disclosed in the incorporated 104751 application, can be used to ensure the initial illumination level correctly corresponds to the desired illumination level. The vision system then captures an image of the part using this initial illumination level. Portions of the image are extracted based on the defined regions of interest. The image quality of the captured image is determined based on an analysis of the extracted image portions.
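Purely as an assumed sketch of the run-time behavior just described, where 'capture_image' and 'roi_quality' are placeholders for the vision system's capture and analysis operations rather than interfaces from the patent:

    # Assumed run-time sketch: illuminate at the programmed level, capture,
    # evaluate the quality in each defined region, and raise the light
    # command until the goal is met or an iteration limit is reached.
    def adjust_lighting(capture_image, roi_quality, regions, goal,
                        initial_level, step=5.0, max_iterations=20):
        level = initial_level
        worst = float("-inf")
        for _ in range(max_iterations):
            image = capture_image(level)                         # illuminate and grab a frame
            worst = min(roi_quality(image, roi) for roi in regions)
            if worst >= goal:
                break                                            # quality goal reached
            level += step                                        # raise the light command and retry
        return level, worst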
Inventors: Tessadro, Ana M.; Wasserman, Richard M.
Assignee: Mitutoyo Corporation
Attorney: Oliff & Berridge, PLC