Object extraction method, and image sensing apparatus using...
Canon Kabushiki Kaisha (Attorney/Agent: Morgan & Finnegan, LLP)
Image analysis – Pattern recognition – Template matching (U.S. Cl. C382S312000)
Reexamination Certificate (active), 1996-10-31, 2003-10-21
Examiner: Johnson, Timothy M. (Department: 2625)
Patent No. 06636635
ABSTRACT:
BACKGROUND OF THE INVENTION
The present invention relates to a method of extracting a target object from an image sensed by an image sensing apparatus, a method of cutting out the object, a database structure used in extraction and a method of creating the database, and an image sensing apparatus or an image sensing system that can obtain object information using these methods. The present invention also relates to a storage medium which provides a program and data to the image sensing apparatus or image sensing system or stores the database.
As a technique for discriminating the presence/absence of a specific object in an image, or for searching a database for an image that includes a specific object and extracting that image, pattern recognition is used. Conventional ways of applying a pattern recognition technique include the following methods.
More specifically, in the first method, an image is segmented into a plurality of regions in advance, and cutting processing is performed so that only the specific region to be recognized remains. The similarity between that region and a standard pattern is then calculated by one of various methods.
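For concreteness, the first method can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions: a segmentation label map is assumed to be available already, and histogram intersection stands in for the "various methods" of similarity calculation; the function and parameter names are illustrative, not taken from any cited publication.

```python
import numpy as np

def best_matching_region(label_map, gray, standard_hist, bins=32):
    # The image is assumed to be segmented already: label_map assigns a
    # region id to every pixel of the grey-level image `gray`.  Each region
    # is cut out and its grey-level histogram is compared with that of a
    # standard pattern (histogram intersection is only one possible measure).
    best_label, best_sim = None, -1.0
    for label in np.unique(label_map):
        pixels = gray[label_map == label]
        hist, _ = np.histogram(pixels, bins=bins, range=(0, 256))
        hist = hist / max(hist.sum(), 1)
        sim = np.minimum(hist, standard_hist).sum()  # histogram intersection
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim
```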
In the second method, a template prepared in advance is scanned over the image, the degree of matching (correlation coefficient) is calculated at each position, and a position where the calculated value is equal to or larger than a predetermined threshold value is searched for (Japanese Patent Laid-Open No. 6-168331).
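The second method, scanning a prepared template and thresholding the correlation coefficient, might look roughly like the following brute-force NumPy sketch; the names and the default threshold are illustrative assumptions, not values from the cited publication.

```python
import numpy as np

def match_template(image, template, threshold=0.8):
    # Scan `template` over `image` and return the positions whose normalized
    # cross-correlation (correlation coefficient) meets `threshold`.
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    hits = []
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.sqrt((w * w).sum()) * t_norm
            if denom == 0:
                continue
            score = (w * t).sum() / denom
            if score >= threshold:
                hits.append((x, y, score))
    return hits
```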
Furthermore, in the third method, upon creating an image database, regions of constituting elements and constituting element names in an image are input, so as to attain high-speed search for an image having a predetermined feature (Japanese Patent Laid-Open No. 5-242160).
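The kind of database structure described for the third method, where regions and names of constituent elements are entered for every stored image so that searches by element are fast, can be illustrated by a small inverted index. This is only a schematic sketch; the class and field names are invented for illustration.

```python
from collections import defaultdict

class ElementIndex:
    # Inverted index from constituent-element name to image ids, plus the
    # region (bounding box) of each element inside each image.
    def __init__(self):
        self.by_name = defaultdict(set)   # element name -> set of image ids
        self.regions = {}                 # (image id, element name) -> bounding box

    def add(self, image_id, element_name, bbox):
        self.by_name[element_name].add(image_id)
        self.regions[(image_id, element_name)] = bbox

    def search(self, element_name):
        # High-speed lookup of all images containing the named element.
        return sorted(self.by_name[element_name])
```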
However, in the first and second methods, since the position or size of a specific object in an image or the hue or the like that reflects the illumination condition is not known in advance, the following problems are posed.
First, since similarity must be calculated using a plurality of standard patterns (images representing identical objects having different sizes, positions, hues, and the like), a considerably large calculation amount and long calculation time are required.
Second, it is generally difficult to find and cut out a specific region having a feature close to that of a standard pattern for the same reason as in the first problem.
Third, the template size can be set in advance only under very limited image generation conditions. When the image generation conditions are not known, the same problem as the first problem is posed. Therefore, a very long calculation time is required for discriminating the presence/absence of a specific object, searching for an image including a specific object, and the like.
In the third method, in order to input regions of constituting elements and their names in an image, input interfaces such as a keyboard, mouse, and the like are required, and when a database of images actually sensed by an image sensing means is to be created, such search data must be created after the image sensing operation.
Furthermore, an application that searches a database of images sensed by an image sensing means for an image containing the object intended as the main object of the scene cannot be realized by conventional image processing methods, which use no information obtained at the time of image sensing.
As general techniques for extracting (cutting out) an image, a chromakey technique using a specific color background, a videomat technique for generating a key signal by image processing (histogram processing, difference, differential processing, edge emphasis, edge tracking, and the like) (Television Society Technical Report, vol. 12, pp. 29-34, 1988), and the like are known.
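As an illustration of key-signal generation, a chromakey key signal can be sketched as a simple color-distance threshold against the known background color. This is a minimal sketch; the key color and tolerance are arbitrary assumptions, and a videomat-style key would instead be derived from the image-processing operations listed above.

```python
import numpy as np

def chroma_key_signal(rgb, key_color=(0, 255, 0), tolerance=60.0):
    # Pixels close (in RGB distance) to the known background color are
    # treated as background; everything else is kept as the extracted object.
    diff = rgb.astype(np.float32) - np.asarray(key_color, np.float32)
    distance = np.sqrt((diff * diff).sum(axis=-1))
    return distance > tolerance   # True where the foreground object is
```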
As another apparatus for extracting a specific region from an image, the technique disclosed in Japanese Patent Publication No. 6-9062 binarizes a differential value obtained by a spatial filter to detect boundary lines, labels the connected regions separated by the boundary lines, and extracts the regions having an identical label.
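The processing summarized above could be approximated by the following NumPy/SciPy sketch; the gradient filter and the relative threshold are assumptions made for illustration, not the filter used in that publication.

```python
import numpy as np
from scipy import ndimage

def extract_labeled_regions(gray, edge_thresh=0.2):
    # Differentiate with a spatial filter, binarize the gradient magnitude to
    # obtain boundary pixels, then label the connected regions separated by
    # those boundaries.
    gy, gx = np.gradient(gray.astype(np.float32))
    grad = np.hypot(gx, gy)
    boundary = grad > edge_thresh * grad.max()   # binarized differential value
    labels, n = ndimage.label(~boundary)         # label regions between boundaries
    return labels, n
```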
Image extraction based on the difference from a background image is a classical technique, and more recently Japanese Patent Laid-Open No. 4-216181 discloses a technique for extracting or detecting target objects in a plurality of specific regions in an image by setting a plurality of masks (specific processing regions) in the difference data between the background image and the image to be processed.
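A background-difference extraction with a plurality of masks, in the spirit of the above, might be sketched as follows; the threshold and the representation of masks as boolean arrays are assumptions for illustration.

```python
import numpy as np

def extract_with_masks(background, current, masks, diff_thresh=25):
    # Compute |current - background| and, inside each mask (specific
    # processing region), keep the pixels whose difference exceeds a threshold.
    diff = np.abs(current.astype(np.int32) - background.astype(np.int32))
    results = []
    for mask in masks:   # each mask is a boolean array with the image's shape
        results.append((diff > diff_thresh) & mask)
    return results
```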
In the method of Japanese Patent Publication No. 7-16250, the distribution of the probability of occurrence of the object to be extracted is obtained, using a color model of that object, from the color-converted data of the current image (which includes the background) and from the difference data between the lightness levels of the background image and the current image.
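Purely as a loose illustration of combining a color model with lightness-difference data, and not the formulation of Japanese Patent Publication No. 7-16250 itself, a per-pixel probability of occurrence could be sketched as below; the Gaussian hue/saturation model and the thresholded lightness difference are assumptions.

```python
import numpy as np

def occurrence_probability(current_hsv, background_v, model_mean, model_inv_cov,
                           lightness_thresh=20):
    # Gaussian color-model likelihood in a converted (hue/saturation) space,
    # gated by the lightness difference between current image and background.
    hs = current_hsv[..., :2].astype(np.float32)
    d = hs - np.asarray(model_mean, np.float32)
    mahal = np.einsum('...i,ij,...j->...', d, model_inv_cov, d)
    color_prob = np.exp(-0.5 * mahal)
    changed = np.abs(current_hsv[..., 2].astype(np.int32)
                     - background_v.astype(np.int32)) > lightness_thresh
    return color_prob * changed   # zero where the lightness did not change
```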
As one technique for extracting a specific object image by extracting the outer contour line of the object from an image, the so-called active contour method (M. Kass et al., "Snakes: Active Contour Models," International Journal of Computer Vision, Vol. 1, pp. 321-331, 1987) is known.
In this technique, an initial contour that is appropriately set so as to surround the object moves and deforms (changes its shape), and finally converges to the outer shape of the object. In the active contour method, the following processing is typically performed: the contour line shape u(s) that minimizes the evaluation function given by equation (1) below is calculated for a contour line u(s) = (x(s), y(s)), expressed using a parameter s that describes the coordinates of each point:
E = \int_0^1 \bigl\{ E_1(u(s)) + w_0\, E_0(u(s)) \bigr\}\, ds \qquad (1)

E_1(u(s)) = \alpha(s) \left| \frac{du}{ds} \right|^2 + \beta(s) \left| \frac{d^2 u}{ds^2} \right|^2 \qquad (2)

E_0(u(s)) = -\bigl| \nabla I(u(s)) \bigr|^2 \qquad (3)
where I(u(s)) represents the luminance level on u(s), and α(s), β(s), and w_0 are appropriately set by the user. For this technique (the active contour method), which obtains the contour line of a specific object by minimizing the above-mentioned evaluation function defined on a contour line, the setting methods described in Japanese Patent Laid-Open Nos. 6-138137, 6-251148, 6-282652, and the like are known as methods of setting the initial contour.
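A minimal greedy discretization of this minimization, assuming the squared gradient magnitude |∇I|^2 has been precomputed and using an 8-neighbour search per contour point, might look as follows; the step is repeated until the contour stops moving. This is an illustrative sketch, not the procedure of any of the cited publications.

```python
import numpy as np

def greedy_snake_step(points, grad_sq, alpha=0.5, beta=0.5, w0=1.0):
    # One greedy iteration of an active contour.  `points` is an (N, 2)
    # integer array of (y, x) contour coordinates; `grad_sq` holds |grad I|^2,
    # so E_0 = -grad_sq.  Each point moves to the 8-neighbour position that
    # minimizes alpha*|du/ds|^2 + beta*|d^2u/ds^2|^2 + w0*E_0 (discretized).
    n = len(points)
    h, w = grad_sq.shape
    out = points.copy()
    for i in range(n):
        prev_pt = out[(i - 1) % n]          # already-updated neighbour
        next_pt = points[(i + 1) % n]
        best, best_e = points[i], np.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = int(points[i][0]) + dy, int(points[i][1]) + dx
                if not (0 <= y < h and 0 <= x < w):
                    continue
                cand = np.array([y, x])
                e = (alpha * np.sum((cand - prev_pt) ** 2)
                     + beta * np.sum((prev_pt - 2 * cand + next_pt) ** 2)
                     - w0 * grad_sq[y, x])
                if e < best_e:
                    best_e, best = e, cand
        out[i] = best
    return out
```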
The chromakey technique cannot be used outdoors because of the strict limitations it places on the background, and it also suffers from the problem of color omission. With the videomat technique, the user must accurately designate the contour pixel by pixel, which requires much labor and skill.
The technique using the difference from the background image cannot normally be applied when an image of the background alone, without the specific object, cannot be sensed (e.g., when the object is huge), and it places a heavy load on the user.
Moreover, since no image sensing conditions (camera parameters and external conditions such as illumination) are taken into consideration, discrimination errors in the region to be extracted from the difference data become very large unless the background image and the image including the object to be extracted are obtained under the same image sensing conditions and from the same fixed position. Also, the technique of Japanese Patent Publication No. 7-16250 is not suitable for extracting an image of an unknown object, since it requires a color model of the object to be extracted.
Among the initial contour setting methods for the above-mentioned active contour method, Japanese Patent Laid-Open No. 6-138137 detects an object region in motion on the basis of the inter-frame difference, and a contour line is then detected by contour extraction (searching for the maximum-gradient edge of the changed region) in the vicinity of the detected region.
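In the spirit of that initial-contour setting, a changed region could be seeded from the inter-frame difference as in the following sketch; the threshold, the choice of the largest blob, and the bounding-box output are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def initial_region_from_motion(frame_prev, frame_cur, diff_thresh=20):
    # Threshold |I_t - I_{t-1}|, keep the largest changed blob, and return its
    # bounding box as a crude seed for an initial contour.
    diff = np.abs(frame_cur.astype(np.int32)
                  - frame_prev.astype(np.int32)) > diff_thresh
    labels, n = ndimage.label(diff)
    if n == 0:
        return None
    sizes = ndimage.sum(diff, labels, index=range(1, n + 1))
    largest = 1 + int(np.argmax(sizes))
    ys, xs = np.nonzero(labels == largest)
    return ys.min(), ys.max(), xs.min(), xs.max()   # (top, bottom, left, right)
```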