Data processing: artificial intelligence – Neural network
Reexamination Certificate
1999-03-31
2001-04-17
Powell, Mark R. (Department: 2122)
Data processing: artificial intelligence
Neural network
C706S015000, C706S016000, C706S017000
Reexamination Certificate
active
06219658
ABSTRACT:
TECHNICAL FIELD
The invention concerns a method for teaching a neurone to classify data into two classes separated by a 1st-order separating surface, that is to say a hyperplane, or by a 2nd-order separating surface, for example a hypersphere.
This invention finds applications in fields using neural networks, in particular medical diagnosis, shape recognition, and the classification of objects or data such as spectra, signals, etc.
STATE OF THE ART
Neural networks are systems which perform, on numerical information, calculations inspired by the behaviour of physiological neurones. A neural network must therefore first learn the tasks it will subsequently be required to carry out. For this purpose, an example base, or learning base, is used: a series of known examples from which the neural network is taught to perform the tasks it will later have to reproduce on unknown information.
A neural network is composed of a set of formal neurones. The neurones are connected to each other by connections which each have a synaptic weight. Each neurone (also referred to as a generalised perceptron) is a calculation unit consisting of an input and an output corresponding to a state. At each instant, the state of each neurone, weighted by the synaptic weight of the corresponding connection, is communicated to the other neurones in the set. The total level of activation which the neurone receives, at its input, is equal to the sum of the weighted states of the other neurones. At each instant, this activation level is used by this neurone to update its output.
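By way of illustration, here is a minimal sketch of one update of such a formal neurone. It is not taken from the patent: the function name and the choice of a sign threshold for the binary output are assumptions made for the example.

```python
import numpy as np

def neurone_output(states, weights):
    """One update of a binary formal neurone (illustrative sketch).

    `states` holds the outputs of the other neurones, `weights` the
    synaptic weights of the corresponding connections.  The total
    activation level is the weighted sum of the incoming states; the
    neurone's output is that activation thresholded to +1 or -1.
    """
    activation = np.dot(weights, states)
    return 1.0 if activation > 0 else -1.0
```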
At the current time there exist several learning algorithms making it possible to teach a neurone, intended to be used within a neural network, to perform a precise task, such as classifying data (or patterns).
The quality of these algorithms is judged from two different aspects, according to the type of task to be performed. A first case is the one where the examples of the learning set can all be learnt by the neurone. In this case, the algorithm must be capable of finding the weights which correctly classify not only the examples but also new patterns, that is to say the neurone must have weights capable of effectively generalising. A second case is the one where it is not possible for the neurone to learn all the examples. Then the algorithm must find the weights which minimise the number of learning errors. However, there is no learning algorithm which gives satisfactory weights in both cases described.
A neurone learning algorithm is described in the document “A thermal perceptron learning rule” by Marcus Frean, in Neural Computation, Vol. 4, pages 946-957 (1992).
This algorithm makes it possible to determine weights, but it does not give the optimum solution to the problem to be resolved. Consequently such an algorithm does not minimise the number of learning errors and does not generalise in an optimum fashion.
Another neurone learning algorithm is described in the document “Perceptron-based learning algorithms” by S. I. Gallant, in IEEE Transactions on Neural Networks, Vol. 1, No. 2, June 1990.
This algorithm obtains good results in the case of neurones capable of learning all the learning examples, provided the algorithm is allowed to converge. However, where the learning set cannot be learnt by a single neurone, this algorithm does not converge. It is therefore impossible to know whether the algorithm has not stopped because no solution exists or because it has not yet been given enough time.
DISCLOSURE OF THE INVENTION
The aim of the invention is precisely to resolve the drawbacks of the techniques proposed previously. To this end, it proposes a method or algorithm which converges whatever the case encountered and which in addition has optimum performance. In other words, the invention proposes a method for teaching a neurone to classify data into two classes separated by a separation surface which can be either linear or quadratic.
Hereinafter, the terms neurone and generalised perceptron will be used interchangeably, both denoting binary units. As an indication, it may be noted, for example, that a linear perceptron is a neurone which produces linear separation surfaces.
The invention concerns, more precisely, a method for teaching a neurone with a quadratic activation function to classify data according to two distinct classes separated by a separating surface, this neurone being a binary neurone having N connections coming from the input and receiving as an input N numbers representing a data item, or pattern, intended to be classified using a learning base containing a plurality of known data, each input of the neurone being affected by the weight ($w_i$) of the corresponding connection,
characterised in that it includes the following steps:
a) defining a cost function C by determining, for each neurone and as a function of a parameter describing the separating surface, a stability $\gamma^{\mu}$ of each data item $\mu$ of the learning base, the cost function being the sum of the costs determined for all the data in the learning base, with:
$$
C_{\sigma} \;=\; \sum_{\substack{\mu = 1 \\ (\gamma^{\mu} > 0)}}^{P} \left[ A - B \tanh \frac{\sigma \, \gamma^{\mu}}{2 \, T_{+}} \right] \;+\; \sum_{\substack{\mu = 1 \\ (\gamma^{\mu} \leq 0)}}^{P} \left[ A - B \tanh \frac{\sigma \, \gamma^{\mu}}{2 \, T_{-}} \right]
$$
where A is any value, B is any positive real number, P is the number of data items in the learning base, $\gamma^{\mu}$ is the stability of the data item $\mu$; $T_+$ and $T_-$ are two parameters of the cost function (referred to as “temperatures”); $\sigma$ takes the value +1 if it has been chosen that $\gamma^{\mu} > 0$ corresponds to a correct classification and $\gamma^{\mu} < 0$ to an incorrect one, and the value −1 in the contrary case.
b) initialising the weights $w_i$, the radii $r_i$, the parameters $T_+$ and $T_-$, with $T_+ < T_-$, a learning rate $\epsilon$, and temperature decrease rates $\delta T_+$ and $\delta T_-$;
c) minimising, with respect to the connection weights $w_i$ and the radii, the cost function C by successive iterations, during which the parameters $T_+$ and $T_-$ decrease at the rates $\delta T_+$ and $\delta T_-$, until a predefined stop criterion is reached (a sketch of this loop follows the list of steps);
d) obtaining values of the weights of the connections and the radii of the neurone.
These values of the connection weights and radii are, for example, displayed on a screen or recorded in computer files.
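By way of illustration, the following sketch implements steps a) to d) for the quadratic stability of the second embodiment given below, with $\sigma = +1$ and plain gradient descent. It is not the patent's prescribed implementation: the values of A and B, the learning rate, the initial temperatures and the annealing schedule are illustrative assumptions, as are all function names.

```python
import numpy as np

def stabilities(w, r, X, y):
    """Quadratic stability: gamma^mu = y^mu * sum_i [(w_i - x_i^mu)^2 - r_i^2]."""
    return y * (((X - w) ** 2 - r ** 2).sum(axis=1))

def cost_and_grads(w, r, X, y, t_plus, t_minus, A=0.5, B=0.5):
    """Cost C_sigma (for sigma = +1) and its gradients w.r.t. w and r."""
    gamma = stabilities(w, r, X, y)
    T = np.where(gamma > 0, t_plus, t_minus)      # T+ if gamma > 0, else T-
    th = np.tanh(gamma / (2.0 * T))
    C = (A - B * th).sum()                        # A = B = 1/2: cost in (0, 1)
    dC_dgamma = -B * (1.0 - th ** 2) / (2.0 * T)  # derivative of A - B*tanh(.)
    coef = (dC_dgamma * y)[:, None]               # chain rule through gamma
    grad_w = (coef * 2.0 * (w - X)).sum(axis=0)
    grad_r = (coef * (-2.0 * r)).sum(axis=0)
    return C, grad_w, grad_r

def learn(X, y, eps=0.01, t_plus=0.5, t_minus=5.0,
          dt_plus=1e-3, dt_minus=1e-2, n_iter=2000):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])    # step b): initialise weights,
    r = np.ones(X.shape[1])                       # radii, with T+ < T-
    for _ in range(n_iter):                       # step c): minimise C while
        C, gw, gr = cost_and_grads(w, r, X, y, t_plus, t_minus)
        w -= eps * gw                             # the temperatures decrease
        r -= eps * gr
        t_plus = max(t_plus - dt_plus, 1e-3)
        t_minus = max(t_minus - dt_minus, t_plus)  # keep T- from falling below T+
    return w, r                                   # step d): final weights, radii
```

The fixed iteration count stands in for the patent's predefined stop criterion; any other criterion, for example a threshold on the change in C between iterations, could be substituted.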
According to one embodiment of the invention, the stability of each data item is:
$$
\gamma^{\mu} \;=\; y^{\mu} \,\ln\!\left[ \sum_{i=1}^{N} \frac{(w_i - x_i^{\mu})^2}{r_i^{2}} \right], \qquad \text{with } y^{\mu} = +1 \text{ or } -1,
$$

and
where $\mu$ is the label of the pattern, $x_i^{\mu}$ is the value of the pattern $\mu$ for the $i$th input, $y^{\mu}$ is the class of the pattern $\mu$, N is the number of inputs and connections of the neurone, $w_i$ is the weight of the connection between the input $i$ and the neurone, and $r_i$ is the radius parameter for the $i$th input.
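Under the same assumptions as the sketch above, this first embodiment amounts to replacing the `stabilities` function with a logarithmic variant (again an illustrative sketch, not the patent's code):

```python
def stabilities_log(w, r, X, y):
    """First embodiment: gamma^mu = y^mu * ln( sum_i (w_i - x_i^mu)^2 / r_i^2 )."""
    return y * np.log((((X - w) ** 2) / r ** 2).sum(axis=1))
```

The argument of the logarithm is a sum of squared differences scaled by the positive radii, so it is positive for any pattern distinct from the weight vector.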
According to another embodiment of the invention, the stability of each data item is
$$
\gamma^{\mu} \;=\; y^{\mu} \left[ \sum_{i=1}^{N} \left[ (w_i - x_i^{\mu})^2 - r_i^{2} \right] \right], \qquad \text{with } y^{\mu} = +1 \text{ or } -1,
$$

and
where $\mu$ is the label of the pattern, $x_i^{\mu}$ is the value of the pattern $\mu$ for the $i$th input, $y^{\mu}$ is the class of the pattern $\mu$, N is the number of inputs and connections of the neurone, $w_i$ is the weight of the connection between the input $i$ and the neurone, and $r_i$ is the radius parameter for the $i$th input.
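This quadratic stability is the variant implemented as `stabilities` in the sketch following the list of steps above. A toy check with illustrative values evaluates both embodiments on the same patterns (expected outputs shown in comments):

```python
X = np.array([[0.5, 0.5],   # a pattern near the centre of the sphere
              [2.0, 2.0]])  # a pattern far from the centre
y = np.array([+1.0, -1.0])  # their classes
w = np.zeros(2)             # weights (centre of the sphere)
r = np.ones(2)              # radius parameters
print(stabilities(w, r, X, y))      # [-1.5, -6.0]
print(stabilities_log(w, r, X, y))  # [-0.693..., -2.079...]
```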
REFERENCES:
patent: 5058179 (1991-10-01), Denker et al.
patent: 5130563 (1992-07-01), Nabet et al.
patent: 5153923 (1992-10-01), Matsuba et al.
patent: 5303328 (1994-04-01), Masui et al.
patent: 5748847 (1998-05-01), Lo
G. C. Fox and W. Furmanski; Load balancing loosely synchronous probl
Commissariat à l'Energie Atomique
Oblon & Spivak, McClelland, Maier & Neustadt P.C.
Powell Mark R.
Starks Wilbert