Reexamination Certificate
2000-04-07
2003-12-16
Mariam, Daniel G. (Department: 2621)
Image analysis
Pattern recognition
Feature extraction
C382S165000, C382S197000, C382S203000, C382S218000
Reexamination Certificate
active
06665439
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of Application
The present invention relates to an image recognition method and an image recognition apparatus for use in an image recognition system, for extracting from a color image the shapes of objects which are to be recognized. In particular, the invention relates to an image recognition apparatus which provides a substantial improvement in edge detection performance when applied to images such as aerial photographs or satellite images which exhibit a relatively low degree of variation in intensity values.
2. Description of Prior Art
In the prior art, various types of image recognition apparatus are known, which are intended for various different fields of application. Typically, the image recognition apparatus may be required to extract from an image, such as a photograph, all objects having a shape which falls within some predetermined category.
One approach to the problem of increasing the accuracy of image recognition of the contents of photographs is to set the camera which takes the photographs in a fixed position and to fix the lighting conditions etc., so that the photographic conditions are always identical. Another approach is to attach markers, etc., to the objects which are to be recognized.
However, in the case of recognizing shapes within satellite images or aerial photographs, such prior art methods of improving accuracy cannot be applied. That is to say, the photographic conditions such as the camera position, camera orientation, weather conditions, etc., will vary each time that a photograph is taken. Furthermore, a single image may contain many categories of image data, such as image data corresponding to buildings, rivers, streets, etc., so that the image contents are complex. As a result, the application of image recognition to satellite images or aerial photographs is extremely difficult.
To extract the shapes of objects which are to be recognized from the contents of an image, image processing to detect edges etc. can be implemented by using the differences between the color values (typically, the intensity, i.e., gray-scale values) of the pixels which constitute a region representing an object which is to be recognized and the color values of the pixels which constitute the regions adjacent to that object. Edge detection processing consists of detecting positions at which there are abrupt changes in the pixel values, and recognizing such positions as corresponding to the outlines of physical objects. Various types of edge detection processing are known. With a typical method, smoothing processing is applied overall to the pixel values, then each pixel for which the first derivative of the intensity variation within the image reaches a local maximum and exceeds a predetermined threshold value is determined, with each such pixel being assumed to be located on an edge of an object in the image. Alternatively, a “zero-crossing” method can be applied, whereby the zero crossings of the second derivative of the intensity variation are detected to obtain the locations of the edge pixels. With a template technique, predetermined shape templates are compared with the image contents to find the approximate positions of objects that are to be recognized, and edge detection processing may then be applied to the results obtained.
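As a minimal illustrative sketch of the two classical approaches just described (and not of the method of the present invention), the following assumes a gray-scale intensity array and uses a Sobel operator and a Laplacian-of-Gaussian filter; the sigma and threshold values are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

def gradient_edges(gray, sigma=1.0, threshold=30.0):
    """Boolean edge map from intensity values alone: smooth, differentiate, threshold."""
    smoothed = ndimage.gaussian_filter(gray.astype(float), sigma)  # overall smoothing
    gx = ndimage.sobel(smoothed, axis=1)   # horizontal first derivative
    gy = ndimage.sobel(smoothed, axis=0)   # vertical first derivative
    magnitude = np.hypot(gx, gy)           # gradient magnitude at each pixel
    return magnitude > threshold           # pixels assumed to lie on an edge

def zero_crossing_edges(gray, sigma=2.0):
    """Alternative sketch: zero crossings of the second derivative (Laplacian of Gaussian)."""
    response = ndimage.gaussian_laplace(gray.astype(float), sigma)
    positive = response > 0
    crossings = np.zeros_like(positive)
    # Mark a zero crossing where the sign of the response changes between
    # a pixel and its neighbour below or to the right.
    crossings[:-1, :] |= positive[:-1, :] != positive[1:, :]
    crossings[:, :-1] |= positive[:, :-1] != positive[:, 1:]
    return crossings
```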
Although prior art image recognition techniques are generally based upon the intensity values of the pixels of an image, various methods are possible for expressing the pixel values of color image data. If the HSI (hue, saturation, intensity) color space is used, then any pixel can be specified in terms of the magnitudes of its hue, saturation and intensity components. The RGB (red, green, blue) method is widely used for expressing image data; however, transform processing can be applied to convert such data to HSI form, and edge detection processing can then be applied by operating on the intensity values which are thereby obtained. HSI information has the advantage of being readily comprehended by a human operator. In particular, an image can easily be judged by a human operator as having a relatively high or relatively low degree of variation in intensity (i.e., high contrast or low contrast).
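For illustration only, one commonly used RGB-to-HSI transform can be sketched as follows; the particular formula variant and the assumption of inputs scaled to [0, 1] are not taken from the patent, which does not specify a conversion.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]; returns (hue, saturation, intensity)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                                           # avoid division by zero
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta)   # hue as an angle in radians
    return hue, saturation, intensity
```

Edge detection could then be applied to the intensity channel returned by such a transform, or in principle to the hue and saturation channels as well.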
Due to the difficulties which are experienced in the practical application of image recognition processing to satellite images or aerial photographs, it would be desirable to effectively utilize all of the color information that is available within such a photograph, that is to say, to use not only the intensity values of the image but also the hue and saturation information contained in the image. However, in general with prior art types of edge detection processing, only part of the color information, such as the intensity values alone, is utilized.
A method of edge detection processing is described in Japanese patent HEI 6-83962, which uses a zero-crossing method and, employing an HSI color space (referred to therein using the designations L*, C*ab and H*ab for the intensity, saturation and hue values respectively), attempts to utilize not only the intensity values but also the hue and saturation information. In FIG. 47, diagrams 200, 201, 202 and 203 show respective examples of the results of image recognition, applied to a color picture of an individual, which are obtained by using that method. Diagram 200 shows the result of edge detection processing that is applied using only the intensity values of each of the pixels of the original picture, diagram 201 shows the result of edge detection processing that is applied using only the hue values, and diagram 202 shows the result obtained by using only the saturation values. Diagram 203 shows the result that is obtained by combining the results shown in diagrams 200, 201 and 202. As can be seen, a substantial amount of noise arises in the image expressed by the saturation values, and this noise is carried over into the combined image shown in diagram 203.
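For illustration, the kind of combination described for diagram 203 can be sketched as a simple logical OR of per-channel edge maps (an assumption as to the exact combining rule used in that reference), which makes clear why spurious edge pixels detected in the saturation channel propagate directly into the combined result.

```python
import numpy as np

def combine_edge_maps(edges_intensity, edges_hue, edges_saturation):
    """Merge boolean per-channel edge maps with a logical OR.

    Any pixel marked as an edge in the noisy saturation map is
    unavoidably marked as an edge in the combined map as well.
    """
    return edges_intensity | edges_hue | edges_saturation
```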
In some cases, image smoothing processing is applied in order to reduce the amount of noise within an image before performing edge detection processing, i.e., the image is pre-processed by using a smoothing filter to blur the image, and edge detection processing is then applied to the resultant image.
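A minimal sketch of such pre-processing, assuming a simple 3×3 averaging kernel (the patent does not prescribe a particular smoothing filter), is as follows; edge detection, for example the gradient method sketched earlier, would then operate on the blurred output rather than on the raw image.

```python
import numpy as np
from scipy import ndimage

def presmooth(gray, size=3):
    """Blur the image with a uniform averaging filter to suppress noise before edge detection."""
    kernel = np.ones((size, size)) / float(size * size)   # simple box (averaging) kernel
    return ndimage.convolve(gray.astype(float), kernel, mode="nearest")
```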
In order to obtain satisfactory results from edge detection processing which is to be applied to an image such as a satellite image or aerial photograph, for example to accurately and reliably extract the shapes of specific objects such as roads, buildings etc. from the image contents, it is necessary not only to determine the degree of “strength” of each edge, but also the direction along which an edge is oriented. In the following, in the description of embodiments of the invention and in the appended claims, the term “edge” is used in the sense of a line segment which is used as a straight-line approximation to a part of a boundary between two adjacent regions of a color image. The term “strength” of an edge is used herein to signify the degree of color difference between pixels located adjacent to one side of that edge and pixels located adjacent to the opposite side, while the term “edge direction” is used in referring to the angle of orientation of an edge within the image, which is one of a predetermined limited number of angles. If the direction of an edge could be accurately determined based upon only a part of the pixels which constitute that edge, then this would greatly simplify the process of determining all of the pixels which are located along that edge. That is to say, if the edge direction could be reliably estimated by using only a part of the pixels located on that edge, then it would be possible to compensate for any discontinuities within an edge which is obtained as a result of the edge detection processing, so that an output image could be generated in which all edges are accurately shown as continuous lines.
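As a hedged illustration of these definitions (not of the claimed method), edge strength and a quantized edge direction can be derived from the local intensity gradient as follows; the use of the Sobel operator and of eight direction bins is an assumption made only for this sketch.

```python
import numpy as np
from scipy import ndimage

def edge_strength_and_direction(gray, n_directions=8):
    """Return per-pixel edge strength and a direction index in 0..n_directions-1."""
    gx = ndimage.sobel(gray.astype(float), axis=1)    # horizontal intensity difference
    gy = ndimage.sobel(gray.astype(float), axis=0)    # vertical intensity difference
    strength = np.hypot(gx, gy)                       # degree of difference across the edge
    angle = np.arctan2(gy, gx)                        # continuous orientation in radians
    step = np.pi / n_directions                       # edges are undirected, so fold into [0, pi)
    direction = np.round((angle % np.pi) / step).astype(int) % n_directions
    return strength, direction
```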
However, with the method described in Japanese patent HEI 6-83962, only the zero-crossing method is used, so that it is not possible to determine edge directions.
Lowe Hauptman & Gilman & Berner LLP
Mariam Daniel G.