Selective attention method using neural network

Data processing: artificial intelligence – Neural network – Learning method

Reexamination Certificate


Details

Subclasses: C706S014000, C706S015000, C706S016000, C706S022000, C706S031000

Type: Reexamination Certificate

Status: active

Patent number: 06601052

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a neural network and a learning method therefor, e.g., the error back-propagation method, and, more particularly, to a selective attention method using neural networks in which the error back-propagation method is applied to the input layer to change the input value rather than the weight values of the network, and the difference between the new input value and the original one is used as a criterion for recognition. A selective filter is added in front of the conventional recognition network in order to change the input value and thereby technologically simulate the selective attention mechanism occurring in the human brain, thus implementing a system that applies selective attention to the perception of patterns such as voice or characters.
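As a rough illustration of this idea, the following is a minimal NumPy sketch, not taken from the patent itself, of adapting the input of a one-layer network by gradient descent on the output error while the weights stay fixed; the function names, layer shape, step count, and learning rate are assumptions for illustration only.

```python
import numpy as np

def f(u):
    # Assumed activation, matching Equation 2 below: f(u) = 2 / (1 + exp(-u))
    return 2.0 / (1.0 + np.exp(-u))

def attend(x0, t, W, steps=20, eta=0.5):
    """Hypothetical sketch: adapt the input, not the weights.

    Gradient descent on the output error E = 0.5 * ||y - t||^2 with
    respect to the input x of a one-layer network y = f(W x).
    """
    x = x0.copy()
    for _ in range(steps):
        h_hat = W @ x
        y = f(h_hat)
        fprime = y * (1.0 - y / 2.0)            # derivative of f at h_hat
        x -= eta * (W.T @ ((y - t) * fprime))   # back-propagate error to the input
    # The distance between the adapted and original input serves as the
    # recognition criterion described above.
    return x, np.linalg.norm(x - x0)
```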
2. Description of the Related Art
Generally, selective attention means concentrating attention on a specific one of two or more simultaneous input signals, based on the significance of the information. Selective attention is a phenomenon that occurs naturally in the human brain.
For example, a person can recognize a desired voice signal even when many people talk at the same time, using differences between the voice signals in frequency or in the location of the voice source. Selective attention is a subject whose principle has long been studied in psychology, with respect to both humans and animals.
Many studies of the selective attention mechanism have been made in psychology and neuroscience, and they fall into two categories: one is the initial selection theory, which holds that unwanted signals among several stimuli are filtered out through a selective filter before the stimuli are processed in the brain; the other is the theory that all signals are transferred to the brain's processor but the brain responds more strongly to the more important signals.
These theories are still hotly debated, and the prevailing opinion is that a combination of the two explains the selective attention mechanism in the human brain.
There have been attempts to technologically simulate the selective attention mechanism for more effective recognition of actual voices or characters. Although existing studies have simulated the selective attention mechanism of the human brain well from a neuroscience standpoint, they are meaningful mainly as biology and are hard to apply to actual recognition. It is also difficult to implement the results of these studies in software or hardware for actual recognition, owing to the extreme complexity of the structures involved.
A representative system developed to overcome the above-stated problems is the multi-layer perceptron, which is widely applied in the mechanical brains known as "neural networks". The multi-layer perceptron enables recognition or judgment of latent information (patterns) through iterative learning of defined patterns.
Although neural networks based on the multi-layer perceptron have excellent adaptability to patterns acquired through repeated training, they are disadvantageous in that recognition performance deteriorates abruptly for input patterns that differ from those used during the training phase.
A description will now be made in detail, with reference to FIG. 1, as to the drawbacks of the prior art.
Referring to FIG. 1, a typical multi-layer perceptron is a neural network having a layer structure that includes at least one intermediate layer between the input and output layers, i.e., it is a series of several single-layer perceptrons.
An input value applied to the input layer is multiplied by the weighted value of the synapse linked to each neuron, and the resulting values are summed at the neurons of the adjacent intermediate layer. The output value of each neuron is then transferred to the next intermediate layer, and this process is repeated up to the output layer. That is, the net input value of the j'th neuron of the l'th intermediate layer, denoted by $\hat{h}_j^l$, is calculated according to Equation 1:

$$\hat{h}_j^l = w_{j0}^l + \sum_{k=1}^{N} w_{jk}^l h_k^{l-1} = \sum_{k=0}^{N} w_{jk}^l h_k^{l-1} \qquad [\text{Equation 1}]$$

where $w_{j0}^l$ represents the bias of $\hat{h}_j^l$; $w_{jk}^l$ represents the weighted value of the synapse linking the k'th neuron of the (l−1)'th intermediate layer to the j'th neuron of the l'th intermediate layer; $h_k^{l-1}$ represents the output value of the k'th neuron of the (l−1)'th intermediate layer, with $h_0^{l-1} = 1$ so that the bias is absorbed into the second sum; and the variable N represents the number of neurons of the (l−1)'th intermediate layer.
Thus the output value of the j'th neuron in the l'th intermediate layer, given its net input $\hat{h}_j^l$, is defined as Equation 2:
$$h_j^l = f(\hat{h}_j^l) = \frac{2}{1 + \exp(-\hat{h}_j^l)} \qquad [\text{Equation 2}]$$
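For concreteness, the feed-forward computation of Equations 1 and 2 can be sketched in a few lines of NumPy; the layer sizes, random weights, and function names below are illustrative assumptions only, with the bias absorbed via $h_0^{l-1} = 1$ as in Equation 1.

```python
import numpy as np

def f(u):
    # Equation 2: f(u) = 2 / (1 + exp(-u))
    return 2.0 / (1.0 + np.exp(-u))

def forward(x, weights):
    """Feed-forward pass of a multi-layer perceptron.

    weights[l] has shape (N_l, N_{l-1} + 1); column 0 holds the bias
    w_{j0}^l, so prepending 1 to each layer input realizes the k = 0
    term of Equation 1.
    """
    h = x
    for W in weights:
        h_hat = W @ np.concatenate(([1.0], h))  # Equation 1
        h = f(h_hat)                            # Equation 2
    return h

# Illustrative dimensions only: 4 inputs, 3 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 5)), rng.standard_normal((2, 4))]
y = forward(rng.standard_normal(4), weights)
```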
For the above-structured multi-layer perceptron to operate correctly as a means of perception, the synapses linking the individual neurons must have adequate weighted values. Determining these values is the learning process of the multi-layer perceptron, and it is performed layer by layer according to the error back-propagation algorithm.
The learning process of the multi-layer perceptron involves receiving P learning patterns at the input layer, determining the desired output value corresponding to each learning pattern as a target value, and calculating the synapse weighted values that minimize the mean squared error (MSE) between the actual output value of the output layer and the target value.
Accordingly, the MSE can be calculated according to Equation 3.
$$E = \frac{1}{2} \sum_{p=1}^{P} \left\| t_p - y_p \right\|^2 \qquad [\text{Equation 3}]$$

where the P learning patterns are $x_p$ ($p = 1, 2, \ldots, P$); $y_p$ is the corresponding output vector; and $t_p$ is the target vector.
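As a sketch, the error of Equation 3 over a batch of patterns is a single expression in NumPy; the array shapes here are hypothetical.

```python
import numpy as np

def mse_error(t, y):
    """Equation 3: E = 1/2 * sum_p ||t_p - y_p||^2.

    t, y: (P, M) arrays of target and output vectors (hypothetical shapes).
    """
    return 0.5 * np.sum((t - y) ** 2)
```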
In the error back-propagation system, the weighted values are iteratively updated according to Equation 4 in order to minimize the MSE of Equation 3.
$$w_{ij}^l(\text{new}) = w_{ij}^l(\text{old}) + \eta\, \delta_j^l\, h_i^{l-1} \qquad [\text{Equation 4}]$$
where the constant $\eta$ represents the learning rate, and $\delta_j^l$ represents the differential value of the error of the output layer with respect to the neuron values of the individual intermediate layers. This differential value can be defined as Equation 5.
$$\delta_i^L = (t_i - y_i)\, f'(\hat{y}_i) \quad \text{for the output layer; and}$$
$$\delta_j^l = f'(\hat{h}_j^l) \sum_i \delta_i^{l+1} w_{ij}^{l+1} \quad \text{for the intermediate layers.} \qquad [\text{Equation 5}]$$
In summary, the conventional error back-propagation system according to the above equations is an algorithm that, repeating over the P learning patterns, computes the total error of the output layer as in Equation 3 from the given input and target vectors via the feed-forward propagation of Equation 1, and differentiates that error with respect to the neuron values of the individual intermediate layers as defined in Equation 5 so as to change the synapse weighted values and thus minimize the total error of the output layer.
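A compact NumPy sketch of one such learning step, tying Equations 1 through 5 together, follows; the bias-absorbing weight layout, function names, and learning rate are illustrative assumptions rather than the patent's own implementation.

```python
import numpy as np

def f(u):
    # Equation 2
    return 2.0 / (1.0 + np.exp(-u))

def f_prime(u):
    fu = f(u)
    return fu * (1.0 - fu / 2.0)  # derivative of Equation 2

def train_step(x, t, weights, eta=0.1):
    """One error back-propagation update for a single pattern.

    weights[l] has shape (N_l, N_{l-1} + 1); column 0 is the bias, so
    prepending 1 to each layer input realizes the k = 0 term of Equation 1.
    """
    # Feed-forward pass (Equations 1 and 2), keeping pre-activations.
    hs, h_hats = [np.concatenate(([1.0], x))], []
    for W in weights:
        h_hat = W @ hs[-1]
        h_hats.append(h_hat)
        hs.append(np.concatenate(([1.0], f(h_hat))))
    y = hs[-1][1:]

    # Output-layer delta (first line of Equation 5).
    delta = (t - y) * f_prime(h_hats[-1])
    for l in range(len(weights) - 1, -1, -1):
        grad = np.outer(delta, hs[l])
        if l > 0:
            # Second line of Equation 5, using the pre-update weights;
            # the bias column is skipped when propagating backward.
            delta_prev = f_prime(h_hats[l - 1]) * (weights[l][:, 1:].T @ delta)
        weights[l] += eta * grad               # Equation 4
        if l > 0:
            delta = delta_prev
    return 0.5 * np.sum((t - y) ** 2)          # this pattern's term of Equation 3
```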
Such a multi-layer perceptron, which divides given input patterns into several classes through simple iterative calculations, is a traditional neural network popular for pattern-recognition problems. However, the multi-layer perceptron is problematic in that perception performance may deteriorate rapidly for inputs that differ from the already-learnt patterns. Accordingly, the present invention is directed to adding a selective filter, using the error back-propagation method applied to the input layer, in front of the conventional recognition network.

Profile ID: LFUS-PAI-O-3002829
