Neural network arithmetic apparatus and neural network...

Data processing: artificial intelligence – Neural network – Learning task

Reexamination Certificate

Details

U.S. classification: C706S015000, C706S033000, C706S039000
Type: Reexamination Certificate
Status: active
Patent number: 06654730

ABSTRACT:

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a neural network arithmetic apparatus and a neural network operation method, and more particularly to a neural network arithmetic apparatus and neural network operation method that perform neuron operations in parallel by plural arithmetic units.
2. Description of the Prior Art
A neural network, built by imitating the information processing of the nervous system in the brain, finds application in information processing such as recognition and knowledge processing. Such a neural network is generally configured by connecting a large number of neurons, which transmit their output signals to each other.
An individual neuron j first calculates the sum of the neuron output values Y_i received from other neurons i, each weighted by a synapse connection weight W_ji. The neuron output value Y_j is then generated by converting the summation with a sigmoid function f. The operation is represented by equation (1) below, where i and j are arbitrary integers.
Y_j = f(Σ_i W_ji · Y_i)   (1)
This operation is called a neuron operation. In the commonly used back-propagation learning process, for a given input, an expected output value d_j (that is, a teacher signal) is supplied from the outside, and the synapse connection weights W_ji are updated so that the error δ_j (= d_j − Y_j) from the actual output value becomes small. The update amount is calculated by equation (2) below.
ΔW_ji = η · δ_j · Y_i   (2)
Here, η is a learning coefficient and δ_j is a learning error. In an output layer, δ_j is calculated using equation (3) below.
δ_j = (d_j − Y_j) · f′(u_j)   (3)
In a hidden layer, δ_j is calculated using equation (4) below.
δ_j = (Σ_k W_kj · δ_k) · f′(u_j)   (4)
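As a concrete illustration of equations (1) through (4), the following Python sketch steps through one neuron operation and one back-propagation update for a single layer. The layer sizes, random weights, teacher signal, and learning coefficient are hypothetical values chosen only to make the sketch runnable; they are not taken from the patent.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def sigmoid_prime(u):
        # Derivative f'(u) of the sigmoid, used in equations (3) and (4).
        s = sigmoid(u)
        return s * (1.0 - s)

    rng = np.random.default_rng(0)
    eta = 0.5                        # learning coefficient (η in equation (2))
    W = rng.normal(size=(3, 4))      # synapse connection weights W_ji
    Y_in = rng.random(4)             # neuron output values Y_i of the preceding stage
    d = np.array([0.0, 1.0, 0.0])    # teacher signal d_j

    # Equation (1), the neuron operation: Y_j = f(Σ_i W_ji · Y_i).
    u = W @ Y_in
    Y = sigmoid(u)

    # Equation (3): output-layer learning error δ_j = (d_j − Y_j) · f'(u_j).
    delta = (d - Y) * sigmoid_prime(u)

    # Equation (4): hidden-layer error δ_j = (Σ_k W_kj · δ_k) · f'(u_j),
    # computed here with the weights before they are updated.
    u_hidden = rng.normal(size=4)    # hypothetical pre-activations of the preceding stage
    delta_hidden = (W.T @ delta) * sigmoid_prime(u_hidden)

    # Equation (2): weight update ΔW_ji = η · δ_j · Y_i.
    W += eta * np.outer(delta, Y_in)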
Performing these operations in a large-scale neural network having thousands to tens of thousands of neurons requires an enormous amount of computation, which calls for dedicated hardware.
As prior art, the following information processing system is proposed in Japanese Published Unexamined Patent Application No. Hei 5-197707. In this system, as shown in FIG. 29, plural arithmetic units 60_1 to 60_x, holding synapse connection weights 62_1 to 62_x (x is an integer) respectively, are coupled in parallel by a time-shared bus 64 connected to a controller 66.
In this information processing system, the arithmetic units 60_1 to 60_x are each responsible for processing specific neurons, and one arithmetic unit selected by the controller 66 (the second arithmetic unit 60_2 in FIG. 29) outputs its neuron output value to the time-shared bus 64.
The arithmetic units 60_1 to 60_x, which hold in their memories the synapse connection weights between the outputting arithmetic unit (the second arithmetic unit 60_2 in FIG. 29) and themselves, each accumulate the value input from the time-shared bus 64 after weighting it by the corresponding synapse connection weight held in their memories.
An arithmetic unit selected by the controller 66 (the second arithmetic unit 60_2 in FIG. 29) converts the value resulting from the accumulative additions by, e.g., a sigmoid function f (equation (1) above) and outputs the result to the time-shared bus 64. Once all the arithmetic units 60_1 to 60_x have output to the time-shared bus 64, every arithmetic unit has completed the computation of equation (1).
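The bus-broadcast scheme described above can be summarized in the following Python sketch. It is a minimal model, not the actual implementation of Hei 5-197707: the ArithmeticUnit class, its field names, and the network size x = 4 are all hypothetical. Each unit holds synapse connection weights for every possible source neuron; the controller selects one unit at a time to place its neuron output value on the bus, and every unit accumulates that value weighted by its stored weight, reproducing equation (1).

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    class ArithmeticUnit:
        # One arithmetic unit on the time-shared bus (hypothetical model).
        def __init__(self, weights, initial_output):
            self.weights = weights        # W_ji for every source neuron i
            self.output = initial_output  # current neuron output value Y_j
            self.acc = 0.0                # accumulator for Σ_i W_ji · Y_i

        def receive(self, source, value):
            # Weight the broadcast value by the stored synapse connection
            # weight for the broadcasting neuron and accumulate it.
            self.acc += self.weights[source] * value

    x = 4                                 # number of arithmetic units (hypothetical)
    rng = np.random.default_rng(0)
    units = [ArithmeticUnit(rng.normal(size=x), rng.random()) for _ in range(x)]

    # The controller selects each unit in turn; the selected unit outputs
    # its neuron output value to the time-shared bus, and all units
    # accumulate the broadcast value.
    for src, sender in enumerate(units):
        broadcast = sender.output         # value placed on the bus
        for unit in units:
            unit.receive(src, broadcast)

    # Once every unit has broadcast, each accumulator holds the full sum
    # of equation (1), and each unit converts it with the sigmoid f.
    for unit in units:
        unit.output = sigmoid(unit.acc)
        unit.acc = 0.0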
The invention disclosed in Japanese Published Unexamined Patent Application No. Hei 5-197707 thus constitutes a large-scale neural network by means of the parallel operation algorithm described above.
However, because the prior-art system has a large number of arithmetic units connected to the time-shared bus, the clock rate of the time-shared bus cannot be raised, and neuron output values cannot be supplied rapidly to the arithmetic units. That is, the inability to speed up the bus transfer clock makes the transmission of neuron output values a bottleneck, posing the problem that a remarkable increase in processing speed cannot be achieved.
Moreover, since data is supplied simultaneously to all the arithmetic units, each unit also receives data it does not need. These factors limit the rate at which data can be supplied to the arithmetic units, posing the problem that operations cannot be performed rapidly.
To solve the above problems, it is conceivable to store all necessary neuron output values, as well as the synapse connection weights, in the memory of each arithmetic unit. However, the limited capacity of that memory makes it impossible to store all neuron output values when the scale of the neural network grows. Another approach is to hold all the neuron output values distributed over the plural arithmetic units. In this case as well, the transmission speed of neuron output values becomes a bottleneck, because an arithmetic unit needs neuron output values stored in the memories of other arithmetic units in order to perform its neuron operations.
SUMMARY OF THE INVENTION
The present invention has been made in view of the above circumstances and provides a neural network arithmetic apparatus and a neural network operation method that, when a neural network is computed in parallel using a large number of arithmetic units, enable the arithmetic units to operate independently and rapidly, and whose processing speed does not fall as the number of arithmetic units is increased to match the scale of the network.
To solve the above problems, a neural network arithmetic apparatus according to an aspect of the present invention performs neuron operations in parallel by plural arithmetic elements, connected over at least one transmission line, to each of which a predetermined number of the plural neurons making up a neural network are assigned. In the apparatus, each of the plural arithmetic elements includes: a synapse connection weight storage memory that stores, for the predetermined number of assigned neurons, the synapse connection weights of at least part of all synapses of each neuron; and an accumulating part that, during a neuron operation, successively selects the predetermined number of neurons and successively selects synapses of the selected neuron, multiplies the synapse connection weight of the selected synapse by the neuron output value of the preceding-stage neuron connected to that synapse, accumulates the results for an identical neuron, and outputs the obtained value as a partial sum of the neuron operation value. Each of the plural arithmetic elements further includes a neuron output value generating part that generates a neuron output value by accumulating the partial sums of neuron operation values output by the plural arithmetic elements until the values of all synapses of one neuron have been added.
That is, since each of the plural arithmetic elements, connected over at least one transmission line and each assigned a predetermined number of the plural neurons making up the neural network, has a synapse connection weight storage memory that stores the synapse connection weights of at least part of all synapses of one neuron, together with an accumulating part, neuron operations on the predetermined number of assigned neurons can be performed independently in units of arithmetic elements.
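To make the partial-sum scheme concrete, the following Python sketch (with hypothetical names and sizes) splits the synapses of one neuron j across three arithmetic elements. Each element's accumulating part computes only its own slice of the sum in equation (1), and the neuron output value generating part accumulates the partial sums from all elements before applying the sigmoid function f.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    rng = np.random.default_rng(0)
    n_synapses, n_elements = 12, 3
    W_j = rng.normal(size=n_synapses)   # all synapse connection weights of neuron j
    Y_prev = rng.random(n_synapses)     # neuron output values of the preceding stage

    # Each arithmetic element's synapse connection weight storage memory
    # holds only part of the synapse connection weights of neuron j.
    slices = np.array_split(np.arange(n_synapses), n_elements)

    # Accumulating part of each element: multiply each stored synapse
    # connection weight by the corresponding preceding-stage output and
    # accumulate a partial sum of the neuron operation value.
    partial_sums = [float(W_j[s] @ Y_prev[s]) for s in slices]

    # Neuron output value generating part: accumulate the partial sums
    # from all elements until all synapses of neuron j are covered,
    # then apply f.
    Y_j = sigmoid(sum(partial_sums))
    assert np.isclose(Y_j, sigmoid(W_j @ Y_prev))   # agrees with equation (1)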
Each arithmetic element can be utilized to calculate not only partial sums of neuron operation values but also partial sums of error signal operations.
Therefore, unlike the conventional approach, separate arithmetic elements for neuron operations and for error signal operations need not be provided, and the operations of a neural network can be performed with fewer arithmetic elements than have conventionally been required. Consequently, a neural network arithmetic apparatus is obtained that can perform the operations of a large-scale neural network without a decrease in operation speed, using almost the same number of, or fewer, arithmetic elements.
