Classification: Image analysis – Applications – Personnel identification
U.S. Classes: C382S118000, C382S291000, C340S575000, C340S576000, C351S205000
Type: Reexamination Certificate
Filed: 1999-11-12
Issued: 2003-08-12
Examiner: Mehta, Bhavesh M. (Department: 2625)
Status: active
Patent number: 06606397
ABSTRACT:
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a face image processing apparatus for eye blink detection, which extracts the naris areas, which are relatively easy to extract by image processing, estimates the eye positions from the naris positions to extract the eye areas, and detects the open/closed state of the eyes from shape features of the eye areas.
2. Description of the Related Art
Conventional face image processing apparatuses include, for example, the face image processing apparatus described in Japanese Patent Application Laid-Open No. Hei 8-300978. This apparatus is characterized in that the eye and naris areas are extracted from a binary image and the opening and closing of the eyes are detected on that image.
FIG. 9 is a schematic structural view showing the face image processing apparatus described in Japanese Patent Application Laid-Open No. Hei 8-300978.
In this figure, a multi-valued image captured by a camera 2, serving as image input means for photographing the person to be detected 1, is temporarily stored in a multi-valued image memory 3, serving as multi-valued image storage means. The stored image is then converted into a binary image by binarization means 4 and temporarily stored in a binary image memory 5, serving as binary image storage means. Feature extraction means 6 extracts the binary areas of the eyes and nares, and open/close detection means 7 detects the open/closed state of the eyes from the shape features of the eyes.
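For orientation, the binarization stage of this pipeline can be sketched roughly as follows (a minimal Python/NumPy sketch; the frame size, threshold value, and function name are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def binarize(gray_frame: np.ndarray, threshold: int = 100) -> np.ndarray:
    """Binarization means: map a multi-valued (grayscale) frame to a binary
    image in which dark facial features (eyes, nares) become 1."""
    return (gray_frame < threshold).astype(np.uint8)

# Illustrative use on one synthetic 240x320 grayscale frame.
gray = np.full((240, 320), 200, dtype=np.uint8)
gray[100:110, 140:180] = 30     # a dark region standing in for an eye
binary = binarize(gray)         # would be stored frame by frame in the binary image memory
print(int(binary.sum()), "dark pixels extracted")
```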
The operation will now be described with reference to FIGS. 10 to 12.
FIG. 10 is a flow chart showing the eye tracing algorithm in the face image processing apparatus shown in FIG. 9.
Referring to FIG. 10, a multi-valued image of the person to be detected 1 is obtained by the camera 2 photographing the face image (Step S1); camera control is also performed at this step. The multi-valued image obtained at Step S1 is temporarily stored frame by frame in the multi-valued image memory 3 at Step S2. The stored multi-valued image is then converted into a binary image by the binarization means 4 (Step S3) and temporarily stored frame by frame in the binary image memory 5 (Step S4).
Then, the feature extraction means 6 extracts the nares from the binary image output by the binary image memory 5 (Step S5), and both eye areas are estimated from the geometric layout of the face, based on the naris positions, to set eye candidate areas (Step S6). At Step S7, the eye areas are selected from the eye candidate areas, using their shape features or their positional deviation from the previous frame. After the initial extraction, the positions of both the eyes and the nares are estimated using their positions in the previous frame as reference. Normally the eye positions are derived from the naris positions, but when the nares cannot be extracted, their position can be complemented from the eye positions. If an eye binary area is selected at Step S7, the open/close detection means 7 performs the open/close detection (Step S8), and processing returns to Step S1.
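The loop of Steps S1 to S8 can be summarized in code roughly as below (a hedged sketch: the extraction routines are placeholders, and the binarization threshold and face-layout offsets are assumptions made only for illustration):

```python
import numpy as np

def extract_nares(binary: np.ndarray):
    """Placeholder for Step S5: return the centroid of the dark (naris) pixels,
    or None when no naris area could be extracted."""
    ys, xs = np.nonzero(binary)
    if xs.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

def estimate_eye_candidates(naris_pos, dy=-60, dx=40):
    """Step S6: place left/right eye candidate positions above the nares,
    using an assumed face layout (offsets are illustrative)."""
    ny, nx = naris_pos
    return [(ny + dy, nx - dx), (ny + dy, nx + dx)]

def trace_eyes(frames, threshold=100):
    """Sketch of the loop of Steps S1 to S8 in FIG. 10."""
    prev_eyes = None
    for gray in frames:                               # Step S1: capture a frame
        binary = (gray < threshold).astype(np.uint8)  # Steps S2-S4: store and binarize
        naris = extract_nares(binary)                 # Step S5: naris extraction
        if naris is not None:
            eyes = estimate_eye_candidates(naris)     # Step S6: eye candidate areas
        elif prev_eyes is not None:
            eyes = prev_eyes                          # complement from the previous frame
        else:
            continue
        prev_eyes = eyes                              # Step S7: selection (simplified)
        # Step S8: open/close detection would run on the selected eye areas here,
        # then processing returns to Step S1 for the next frame.
```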
FIG. 11 shows an example in which an eyebrow is erroneously extracted when the eye binary area is selected at Step S7 as described above. Part of the eye binary area is often lost in this way, depending on the illumination of the person to be detected or when the binarization threshold is much higher than the proper value. If both the eyes and the eyebrows fall within the eye candidate binary areas 16, an eyebrow of similar shape may be extracted by mistake. Moreover, once an eyebrow has been erroneously extracted, tracking goes wrong, because the relative position of the eyebrow to the eye is similar to the relative position of the eye binary area to the naris binary area 12, and it is then difficult to return to accurate eye tracking.
In contrast to FIG. 11, if the binarization threshold is lower than the proper value, the shadowed outer canthus of the left eye may be included in the eye binary area (see FIG. 7(b)). The open/close detection means 7 sets an open/close detecting eye cut area 15 using the eye centroid 11 as reference. Therefore, if any unwanted binary area is included, the eye cut area deviates from the actual eye area, and correct open/close detection cannot be carried out.
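How the cut area depends on the centroid can be illustrated as follows (a small sketch under assumed image coordinates and an assumed cut-area size; it is not the patent's actual geometry):

```python
import numpy as np

def eye_cut_area(eye_binary: np.ndarray, half_w: int = 20, half_h: int = 10):
    """Set an open/close detecting cut area centered on the centroid of the
    eye binary area (half-sizes are illustrative)."""
    ys, xs = np.nonzero(eye_binary)
    cy, cx = int(ys.mean()), int(xs.mean())     # eye centroid (11)
    return (cy - half_h, cy + half_h, cx - half_w, cx + half_w)

eye = np.zeros((60, 120), dtype=np.uint8)
eye[28:33, 40:80] = 1                 # the actual eye binary area
print(eye_cut_area(eye))              # cut area centered on the eye
eye[25:45, 95:118] = 1                # a shaded outer canthus included by a low threshold
print(eye_cut_area(eye))              # the centroid shifts, so the cut area deviates from the eye
```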
FIG. 12 is a flow chart showing the specific operation of the open/close detection at Step S8 in FIG. 10.
At Step S10, a centroid position is computed for the eye binary area selected at Step S7. At Step S11, an eye area is cut out for the open/close detection so that the left-hand and right-hand portions in the horizontal direction of the face are balanced about the eye centroid. At Step S12, the shape feature of the outer canthus is extracted from the eye binary area within the open/close detecting eye cut area and is taken as an eye evaluation function value. At Step S13, this eye evaluation function value is compared with an open/close threshold, leading to an eye opening detection at Step S14 or an eye closure detection at Step S15, according to the result. When eye closure is detected, the duration for which the eye remains closed is counted at Step S16.
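As a rough illustration of Steps S10 to S16, the evaluation and thresholding could look like the following (a hedged sketch: the vertical thickness of the binary area is used here as a stand-in for the outer-canthus shape feature, which is an assumption, not the patent's definition):

```python
import numpy as np

OPEN_CLOSE_THRESHOLD = 4   # illustrative open/close threshold
closed_frames = 0          # Step S16: count of consecutive closed frames

def eye_evaluation_value(eye_cut: np.ndarray) -> int:
    """Stand-in for Step S12: the maximum vertical thickness of the eye
    binary area inside the cut area (an assumed shape feature)."""
    return int(eye_cut.sum(axis=0).max()) if eye_cut.any() else 0

def detect_open_close(eye_cut: np.ndarray) -> bool:
    """Steps S13-S16: compare the evaluation value with the threshold and
    count closed frames. Returns True when the eye is judged open."""
    global closed_frames
    is_open = eye_evaluation_value(eye_cut) >= OPEN_CLOSE_THRESHOLD   # Step S13
    closed_frames = 0 if is_open else closed_frames + 1               # Steps S14-S16
    return is_open
```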
Alternatively, conventional face image processing apparatuses that use a template include, for instance, the face image processing apparatus described in Japanese Patent Application Laid-Open No. Hei 8-175218. This apparatus is characterized by template production means, which produces an objective template specific to the person to be detected by successively moving a preset standard face template vertically and horizontally across the picked-up image and performing a correlation operation, and by eye area detection means, which detects an eye area of the person to be detected using the objective template.
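The correlation search that yields the objective template can be sketched as follows (a minimal normalized cross-correlation scan in NumPy; the stride and scoring details are assumptions for illustration, not the publication's circuit):

```python
import numpy as np

def correlation_score(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized correlation between an image patch and the standard template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_match(image: np.ndarray, template: np.ndarray, stride: int = 2):
    """Move the standard template vertically and horizontally over the image
    and return the highest-correlation position (sketch only)."""
    th, tw = template.shape
    best = (-1.0, (0, 0))
    for y in range(0, image.shape[0] - th + 1, stride):
        for x in range(0, image.shape[1] - tw + 1, stride):
            score = correlation_score(image[y:y + th, x:x + tw].astype(float),
                                      template.astype(float))
            best = max(best, (score, (y, x)))
    return best   # the best-matching patch would serve as the objective template
```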
FIG. 13 is a schematic structural view showing the face image processing apparatus described in Japanese Patent Application Laid-Open No. Hei 8-175218.
Referring to FIG. 13, an image processor 31 is connected to a camera 2 for photographing the person to be detected 1, and the face image of the person to be detected 1 is delivered to the image processor 31. The image processor 31 incorporates an A/D converter, a normalization circuit, and a correlation operation circuit, in which the input image signal is converted into a digital signal and then normalized as a gray-scale image. A memory 32 is also connected to the image processor 31; a standard template and configuration data of the facial elements such as the eyes and eyebrows are stored in the memory 32 in advance. The image processor 31 is further connected to an ECU (Electronic Control Unit) 33 and delivers the processing results to the ECU 33. The ECU 33 determines the operation state of the person to be detected 1 from the processing results and outputs a control signal to an alarm 34, thereby issuing an alarm.
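The role of the ECU 33, which judges the operator's state from the processing results and drives the alarm 34, might be sketched like this (a hypothetical decision rule; the closed-eye duration limit is an assumption, not a value from the publication):

```python
# Hypothetical ECU-style decision: raise an alarm when the eyes have stayed
# closed for longer than an assumed duration.
CLOSED_LIMIT_FRAMES = 45     # e.g. 1.5 s at 30 frames/s (illustrative)

def ecu_decision(closed_frames: int) -> bool:
    """Return True when the alarm output should be driven."""
    return closed_frames > CLOSED_LIMIT_FRAMES

if ecu_decision(closed_frames=60):
    print("alarm: operator may be dozing")
```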
However, the conventional face image processing apparatus described above in Japanese Patent Application Laid-Open No. Hei 8-300978 has the following problems.
Namely, the binarization means converts the multi-valued image into a binary image for image processing, and the gray-level information of the multi-valued image is thereby discarded. For this reason, the binarization threshold must be controlled for every frame in accordance with the brightness of the photographed face image of the person to be detected, so that the shape features of the eye areas are recognized correctly. The feature extraction results vary greatly depending on this binarization threshold.
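Per-frame control of the threshold as a function of image brightness might look like the following (a hedged sketch; the linear mapping and its offset are assumptions for illustration, not the scheme used or criticized here):

```python
import numpy as np

def frame_threshold(gray_frame: np.ndarray, offset: int = -40) -> int:
    """Pick a binarization threshold per frame from the mean face brightness
    (an assumed linear rule for illustration only)."""
    return int(np.clip(gray_frame.mean() + offset, 1, 254))

def binarize_adaptive(gray_frame: np.ndarray) -> np.ndarray:
    """Binarize with a threshold adjusted to this frame's brightness."""
    return (gray_frame < frame_threshold(gray_frame)).astype(np.uint8)
```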
Further, the binary shape feature varies unstably under the influence of the binarization threshold. For example, the brightness of the entire face may vary, or the face orientation may change during the
Assignee: Mitsubishi Denki Kabushiki Kaisha
Law Firm / Agent: Sughrue & Mion, PLLC
Examiners: Mehta, Bhavesh M.; Sukhaphadhana, Christopher