Image feature extractor, an image feature analyzer and an...

Image analysis – Pattern recognition


Details

C382S124000, C382S197000

Reexamination Certificate

active

06243492

ABSTRACT:

BACKGROUND OF THE INVENTION
The present invention relates to an image feature extractor for extracting image features of a pattern image such as a skin pattern image represented by a fingerprint or a palm-print, an image feature analyzer for analyzing the image features, and an image matching system for collating the pattern image with those filed in a database.
In the following description, the present invention is explained in connection with a skin pattern feature extractor, a skin pattern feature analyzer and a skin pattern matching system applied to personal identification, as representative examples of the above devices. However, it will be easily understood that the present invention can be widely applied to matching verification of pattern images having certain features, and is not limited to skin pattern matching verification.
As the most orthodox art for identifying a skin pattern such as a fingerprint or a palm-print, there is known a matching verification method that makes use of feature points of the fingerprint, called minutia matching, which is disclosed in a Japanese patent entitled “Fingerprint Collation System”, published as specification No. 63-34508. A demerit of minutia matching, however, is that it takes a lot of computational time because of the large amount of data to be processed.
To mitigate this demerit, a technique has been proposed for reducing the number of candidates to be verified by way of the Ten Print Card, which is disclosed in “Fingerprint Card Classification for Identification with a Large-Size Database” by Uchida, et al., Technical Report of IEICE (The Institute of Electronics, Information and Communication Engineers) PRU95-201 (January 1996). In the fingerprint classification of this technique, high-speed rough matching is performed with reference to certain features, such as the basic fingerprint classification category and the distances between singular points (namely, cores and deltas), in order to pre-select candidates to be verified with minutia matching. As the basic fingerprint classification categories, the fingerprint pattern of each finger is classified into one of a set of basic classification categories, such as the Whorl, the Right Loop, the Left Loop or the Arch, and the numbers of ridge lines and the distances between cores and deltas of each finger are extracted as further features, together with their confidence values, for the rough matching of the technique.
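As a rough illustration of this kind of pre-selection, the following Python sketch compares a query card against filed cards using coarse features (pattern class, core-delta distance, ridge count) weighted by their confidence values. All names, weights and thresholds here are assumptions made for illustration only, not the procedure of the cited report.

```python
# Hedged sketch of rough-matching pre-selection on coarse card features.
# The feature names, weights and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CardFeatures:
    pattern_class: str          # e.g. "whorl", "right_loop", "left_loop", "arch"
    core_delta_distance: float  # distance between core and delta (pixels)
    ridge_count: int            # number of ridge lines between core and delta
    confidence: float           # 0.0 .. 1.0 reliability of the extracted values

def rough_score(query: CardFeatures, filed: CardFeatures) -> float:
    """Return a dissimilarity score; small values mean plausible candidates."""
    if query.pattern_class != filed.pattern_class:
        return float("inf")     # different basic category: reject immediately
    w = min(query.confidence, filed.confidence)  # trust unreliable features less
    dist_term = abs(query.core_delta_distance - filed.core_delta_distance)
    ridge_term = abs(query.ridge_count - filed.ridge_count)
    return w * (dist_term + 5.0 * ridge_term)

def preselect(query: CardFeatures, database: list[CardFeatures],
              threshold: float = 50.0) -> list[CardFeatures]:
    """Keep only candidates that cheap rough matching cannot rule out."""
    return [c for c in database if rough_score(query, c) <= threshold]
```

The pre-selected candidates would then be passed on to the far more expensive minutia matching.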
The data amount of these features is at most several hundred bytes per card, far smaller than the several thousand bytes necessary for minutia matching. The smaller data amount, and consequently the smaller amount of calculation, enables high-speed fingerprint verification.
By thus reducing the number of candidates before minutia matching, the pre-selection technique of the prior document by Uchida et al. improves the total cost performance of the fingerprint verification system as a whole.
However, because the deltas are usually found in edge parts of the fingerprint, the numbers of ridge lines and the distances between a core and a delta are often difficult to extract when so-called rolled printing is not performed correctly. Rolled printing here means taking the fingerprint pattern by pressing and rolling a fingertip on a card, as is usually done when the police take fingerprints. Furthermore, the above technique is not efficient for pre-selecting Arch patterns, which have no delta.
Therefore, methods for improving pre-selection performance have been sought. One promising solution is to find other appropriate features to be used for the rough matching, in place of or in addition to the above features. One possible feature is the direction pattern of the fingerprint ridge lines. “Personal Verification System with High Tolerance of Poor Quality Fingerprints” by Sasagawa et al., Transactions of IEICE D-II, Vol. J72-D-II, No. 5, pp. 707-714, 1989, is an example of applying the direction pattern to rough matching.
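To make the notion of a “direction pattern” concrete, the sketch below estimates a block-wise ridge orientation field from image gradients, a standard gradient-averaging technique. It is only an assumed illustration of the kind of feature such methods use, not the exact procedure of the cited paper.

```python
# Hedged sketch: block-wise ridge direction field from image gradients.
# A standard gradient-based orientation estimate, used here only to
# illustrate what a "direction pattern" feature vector may look like.
import numpy as np

def direction_pattern(image: np.ndarray, block: int = 16) -> np.ndarray:
    """Return one ridge orientation (radians, in [0, pi)) per block."""
    gy, gx = np.gradient(image.astype(float))
    h, w = image.shape
    angles = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            bx = gx[y:y + block, x:x + block]
            by = gy[y:y + block, x:x + block]
            # Average the doubled gradient angle so opposite gradients reinforce.
            vx = np.sum(2.0 * bx * by)
            vy = np.sum(bx ** 2 - by ** 2)
            theta = 0.5 * np.arctan2(vx, vy)          # dominant gradient direction
            angles.append((theta + np.pi / 2) % np.pi)  # ridge flow is perpendicular
    return np.asarray(angles)  # flattened feature vector of local directions
```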
A feature value with as many dimensions as the direction pattern of the fingerprint ridge lines, however, makes high-speed matching verification difficult. Therefore, when applying such a feature value, it is necessary to find a feature extraction method that extracts a condensed feature value of an appropriate dimension size, giving good results and sufficient efficiency at the same time. A well-known method of such feature extraction is principal component analysis. An example of the method applied to the direction pattern of fingerprint ridge lines is described in a paper by C. L. Wilson et al., entitled “Massively Parallel Neural Network Fingerprint Classification System”, NISTIR 4880, July 1992, published by the National Institute of Standards and Technology.
In the method of C. L. Wilson et al., the direction pattern is processed as follows.
First, training data of N fingerprint images are prepared. Then, defining feature vectors composed of ridge directions, each representing the local flow of ridge lines, principal component analysis is performed on the distribution of the feature vectors to obtain the principal component vectors having the larger eigenvalues. Using the principal component vectors thus obtained, a Karhunen-Loève Transform (hereafter abbreviated as KLT) of the feature vector of an objective fingerprint image is performed, from which the feature values of the upper dimensions are extracted as the condensed feature value.
In the following paragraphs, more details of the feature extraction using the KLT are described.
First, a variance-covariance matrix V is calculated as follows from the feature vectors $u_i$ ($i = 1, \ldots, N$) of the fingerprint images prepared for the training:
$$V = \frac{1}{N-1} \sum_{i=1}^{N} (u_i - \bar{u})(u_i - \bar{u})^t \qquad (1)$$

$$\bar{u} = \frac{1}{N} \sum_{i=1}^{N} u_i \qquad (2)$$
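A minimal numpy sketch of equations (1) and (2), assuming the N training feature vectors are stacked as the rows of an N×M array:

```python
# Hedged numpy sketch of equations (1) and (2).
# U is assumed to be an (N, M) array whose rows are the training
# feature vectors u_i (ridge-direction features of the training images).
import numpy as np

def mean_and_covariance(U: np.ndarray):
    N = U.shape[0]
    u_bar = U.mean(axis=0)        # equation (2): mean vector
    D = U - u_bar                 # centered vectors u_i - u_bar
    V = (D.T @ D) / (N - 1)       # equation (1): (M, M) variance-covariance matrix
    return u_bar, V
```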
Here, the feature vectors $u_i$ are column vectors of M dimensions, $\bar{u}$ is the mean vector of the $u_i$, and the superscript $t$ denotes a transpose.
With the eigenvalues of the variance-covariance matrix V represented by $\lambda_i$ ($i = 1, \ldots, M$; $\lambda_i > \lambda_{i+1}$), the eigenvectors corresponding to the eigenvalues $\lambda_i$ are represented by $\Psi_i$. These eigenvectors $\Psi_i$ are the principal component vectors, and the one corresponding to the largest eigenvalue $\lambda_1$ is called the first principal component vector, followed by the second, the third, . . . , the M-th principal component vector, in order of the corresponding eigenvalue.
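Continuing the sketch, the principal component vectors can be obtained by an eigendecomposition of V, sorted so that $\Psi_1$ corresponds to the largest eigenvalue (numpy returns ascending order, so it is reversed here):

```python
# Hedged sketch: principal component vectors of V, ordered by decreasing
# eigenvalue so that psi[:, 0] is the first principal component vector.
import numpy as np

def principal_components(V: np.ndarray):
    eigvals, eigvecs = np.linalg.eigh(V)       # eigh: V is symmetric
    order = np.argsort(eigvals)[::-1]          # largest eigenvalue first
    return eigvals[order], eigvecs[:, order]   # lambda_i and psi_i (as columns)
```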
For a feature vector u of an objective fingerprint image, which is not used for the training, the projection of the feature vector u onto a subspace defined by one or more primary principal component vectors $\Psi_i$ (those corresponding to the largest eigenvalues) is calculated; that is, the projection components $v_i$ around the mean vector $\bar{u}$ are obtained as follows:

$$v_i = \Psi_i^t (u - \bar{u})$$

The projection components $v_i$ thus obtained are the feature values extracted by way of the KLT.
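The extracted feature values are thus simply projections of the centered vector onto the leading principal component vectors. A sketch, assuming psi holds the eigenvectors as columns (as above) and k is the number of retained dimensions:

```python
# Hedged sketch: KLT feature extraction v_i = psi_i^t (u - u_bar),
# keeping only the k leading (largest-eigenvalue) components.
import numpy as np

def klt_features(u: np.ndarray, u_bar: np.ndarray,
                 psi: np.ndarray, k: int) -> np.ndarray:
    return psi[:, :k].T @ (u - u_bar)   # condensed feature vector of length k
```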
In C. L. Wilson et al., a feature vector composed of the primary projection components $v_i$ corresponding to the primary principal component vectors is input to a neural network for fingerprint classification. However, these extracted feature values are used only for classification, and no application to matching verification is taught therein.
Although it is not applied to skin pattern analysis, an example of facial image matching verification making use of feature values obtained by the KLT is described in “Eigenfaces for Recognition” by A. Pentland et al., MIT Media Lab Vision and Modeling Group, Technical Report #154, and also in “Probabilistic Visual Learning for Object Detection” by B. Moghaddam and A. Pentland, pp. 786-793 of Proceedings of the 5th International Conference on Computer Vision, 1995. In these two documents, a method of facial image matching verification is described, wherein a vector consisting of each pixel value of a facial image as each component thereof is used as the source
