Vector normalizing apparatus
Type: Reexamination Certificate
Classification: Electrical computers: arithmetic processing and calculating – Electrical digital calculating computer – Particular function performed
Cross-reference classification: C359S107000
Filing date: 1998-09-09
Issue date: 2001-04-17
Examiner: Malzahn, David H. (Department: 2121)
Status: Active
Patent number: 06219682
BACKGROUND OF THE INVENTION
The present invention relates to a vector normalizing apparatus and, more particularly, to a vector normalizing apparatus used in a competitive learning system wherein competitive learning for topological mapping, pattern recognition or the like is executed by deciding a winner element meeting a certain distance measure using an inner product operation, and then performing some operation on the winner element and some elements determined by the winner element.
There are well known competitive learning algorithms that execute topological mapping or pattern recognition by deciding a winner element that meets a certain distance measure, and performing some operation on the winner element and some elements determined by the winner element (T. Kohonen, “Self-Organization and Associative Memory”, Third Edition, Springer-Verlag, Berlin, 1989).
These algorithms have a competitive process for selecting a winner element that meets a distance measure, e.g. the Euclidean distance, the Manhattan distance or the inner product, with respect to a certain input. When such a competitive learning algorithm is executed on a computer, any distance measure can readily be used; the Euclidean distance is the most frequently used, since it is generally reported to exhibit excellent performance as a distance measure. However, a great deal of time is then needed to process large-capacity data, e.g. images.
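For illustration only (this sketch is not part of the patent specification; the function name and array layout are assumptions), the competitive step can be expressed as follows, with the Euclidean winner minimizing the distance to the input and the inner-product winner maximizing the dot product with it:

```python
import numpy as np

def winner_index(x, weights, measure="euclidean"):
    """Return the index of the winning weight vector for input x.

    weights is an (n_elements, dim) array.  The Euclidean winner minimizes
    the distance to x; the inner-product winner maximizes the dot product
    with x.
    """
    if measure == "euclidean":
        return int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    if measure == "inner_product":
        return int(np.argmax(weights @ x))
    raise ValueError("unknown distance measure: " + measure)
```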
To perform a Euclidean distance calculation in hardware so as to process large-capacity data, e.g. images, at high speed, it is necessary to use an electrical difference circuit, an electrical square circuit and an electrical summation circuit. Accordingly, the overall size of the circuits becomes exceedingly large, and it is therefore difficult to realize a Euclidean distance calculation in hardware in the present state of the art. In contrast, an algorithm using the inner product as a distance measure needs no electrical difference circuit. If such an algorithm is realized with optical hardware, high-speed processing can be performed effectively, because the inner product operation can exploit the nature of light, i.e. high-speed, parallel propagation. Some competitive learning systems that execute the inner product operation with an optical system have already been proposed [e.g. Taiwei Lu et al., "Self-organizing optical neural network for unsupervised learning", Opt. Eng., Vol. 29, No. 9, 1990; J. Duivillier et al., "All-optical implementation of a self-organizing map", Appl. Opt., Vol. 33, No. 2, 1994; and Japanese Patent Application Unexamined Publication (KOKAI) Nos. 5-35897 and 5-101025].
When competitive learning is performed by using the inner product as a distance measure, the accuracy of competitive learning tends to become lower than in the case of using the Euclidean distance. This may be explained as follows.
As shown in FIG. 1, let us consider a two-dimensional input vector X and two candidates m1 and m2 for the weight vector meeting a certain distance measure with respect to the input vector X. When the Euclidean distance is used, the weight vector at the shortest distance from the input vector becomes the winner element; therefore, m1 becomes the winner element from the relation between the distances d1 and d2 in FIG. 1, i.e. d1 < d2. When the inner product is used, the weight vector having the largest inner product value is regarded as most similar to the input vector and becomes the winner element. In FIG. 1, the inner product value is expressed as the product of Di (i = 1, 2), the orthogonal projection of mi (i = 1, 2) onto X, and the L2-norm of X. It should be noted that the L2-norm is the square root of the sum of the squares of the vector components. The inner products can therefore be compared by comparing D1 and D2. In this case, however, D1 < D2, and hence m2 is unfavorably selected as the winner element.
Thus, when the inner product is used, a weight vector with a relatively large L2-norm may yield a relatively large inner product value, and hence a high degree of similarity, even if it lies at a relatively long Euclidean distance from the input vector. Such a weight vector is therefore likely to become the winner. That is, the degree of similarity measured by the inner product depends on the L2-norm of each vector, so the L2-norms should preferably be made uniform in order to perform effective competitive learning. A simple method of making the L2-norms uniform is to divide (normalize) each vector component by the L2-norm. By doing so, as shown in FIG. 2, the relations d1 < d2 and D1 > D2 are obtained. Consequently, m1 becomes the winner element whether the Euclidean distance or the inner product is used, and effective competitive learning can be performed even when the inner product is used.
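For illustration only (the numerical values are assumptions, chosen to mimic the situation of FIG. 1 and FIG. 2), the following sketch shows the two measures disagreeing on the winner before normalization and agreeing once each vector has been divided by its L2-norm:

```python
import numpy as np

x = np.array([1.0, 0.0])                 # input vector X
weights = np.array([[1.0, 0.2],          # m1: nearest in Euclidean distance
                    [3.0, -1.0]])        # m2: farther away, but with a large L2-norm

# Without normalization the two measures disagree on the winner:
print(np.argmin(np.linalg.norm(weights - x, axis=1)))   # 0 -> m1 wins (Euclidean)
print(np.argmax(weights @ x))                            # 1 -> m2 wins (inner product)

# After dividing each vector by its L2-norm, both measures pick m1:
xn = x / np.linalg.norm(x)
wn = weights / np.linalg.norm(weights, axis=1, keepdims=True)
print(np.argmin(np.linalg.norm(wn - xn, axis=1)))        # 0
print(np.argmax(wn @ xn))                                # 0
```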
It should be noted that there are norms other than the L2-norm, e.g. the L1-norm, the L∞-norm, and so on. No matter which norm is used, competitive learning that is effective to a certain extent can be performed; among them, however, the L2-norm is the best. The reason for this will be explained later.
In general, when the L1-norm, the L2-norm or another norm is made uniform by normalization, information concerning the norm of the original vector is lost. For example, if each component of a vector is divided by its L2-norm in order to make the L2-norm uniform, the original L2-norm information is lost. In competitive learning that identifies only the direction of vector data, losing this norm information does no harm. However, ordinary competitive learning is not always performed to identify only the direction of vector data; in many cases, satisfactory learning cannot be achieved if the norm information is lost.
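For illustration only (the vectors are assumptions), the following sketch shows how two inputs that differ only in magnitude become indistinguishable after division by the L2-norm:

```python
import numpy as np

x = np.array([3.0, 4.0])      # L2-norm 5
y = 2.0 * x                   # L2-norm 10: same direction, twice the magnitude

print(x / np.linalg.norm(x))  # [0.6 0.8]
print(y / np.linalg.norm(y))  # [0.6 0.8] -- the norm information has been lost
```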
SUMMARY OF THE INVENTION
Accordingly, we give the following condition as a requirement that should be satisfied by the vector normalizing apparatus according to the present invention:
(A-1) Information concerning the norm of the original vector is not lost by normalization.
In general, normalization is effected by dividing each vector component by the norm that is to be made uniform. However, this normalization process involves problems such as division by zero (divergence of the data) when the norm is zero. Therefore, it is desirable for the vector normalizing apparatus to need no device that divides vector components by the norm.
Accordingly, we add the following condition as another requirement that should be satisfied by the vector normalizing apparatus according to the present invention:
(A-2) The vector normalizing apparatus needs no device that divides vector components by norm.
In view of the above-described problems, an object of the present invention is to provide a vector normalizing apparatus that satisfies the conditions (A-1) and (A-2), i.e. a vector normalizing apparatus in which information concerning the norm of the original vector is not lost by normalization, and which needs no device that divides vector components by norm.
To attain the above-described object, the present invention provides a vector normalizing apparatus including a vector input device for entering a vector, and an additional component calculating device that receives the vector entered through the vector input device and calculates an additional component to be added to the entered vector such that the norm of the vector after the addition of the component becomes constant. The vector normalizing apparatus further includes a vector component adding device for adding the output from the additional component calculating device as a component of the entered vector.
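For illustration only: the passage above does not fix a particular formula, but one minimal sketch consistent with it, assuming the L2-norm is the norm being made constant and that the target constant R is chosen no smaller than the largest possible input norm, is to append the component sqrt(R^2 - ||x||^2). The original components, and hence the original norm, remain available, and no division is performed:

```python
import math

def add_normalizing_component(x, R):
    """Append one component so that the augmented vector has L2-norm R.

    Assumes R is at least as large as the L2-norm of x.  The original
    components are left unchanged, so the original norm can still be
    recovered (condition (A-1)), and no division by the norm is performed
    (condition (A-2)).
    """
    squared_norm = sum(v * v for v in x)
    extra = math.sqrt(max(R * R - squared_norm, 0.0))  # guard against rounding error
    return list(x) + [extra]

augmented = add_normalizing_component([3.0, 4.0], 10.0)
print(augmented)                                 # [3.0, 4.0, 8.660...]
print(math.sqrt(sum(v * v for v in augmented)))  # 10.0
```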
In this case, it is desirable that the vector normalizing apparatus should further include a transformation device that performs transformation for each component of the vector entered through the vector input device to limit the range of values which each component of the entered vector may assume.
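For illustration only (the patent does not specify the transformation; the use of tanh here is purely a hypothetical choice), a component-wise squashing function bounds every component and therefore bounds the norm of the entered vector, which makes it straightforward to choose the constant target norm:

```python
import math

def limit_components(x):
    """Hypothetical range-limiting transformation: squash each component into (-1, 1)."""
    return [math.tanh(v) for v in x]

limited = limit_components([0.5, -3.0, 12.0])
# Every component now lies in (-1, 1), so the L2-norm of the transformed vector
# is below sqrt(len(limited)); any constant target norm R >= sqrt(len(limited))
# therefore works for the additional-component step sketched above.
print(limited, math.sqrt(sum(v * v for v in limited)))
```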
It is also desirable that the range of values which the output from the additional component calculating device may assume should be the same
Inventors: Shiratani, Fumiyuki; Terashima, Mikihiko
Assignee: Olympus Optical Co., Ltd.
Attorney/Agent: Pillsbury & Winthrop LLP