Title: Semiconductor integrated circuit device comprising a memory...
Patent Number: 06205556
Type: Reexamination Certificate
Status: active
Date filed: 1998-11-24
Date issued: 2001-03-20
Examiner: Coleman, Eric (Department: 2783)
Classification: Electrical computers and digital processing systems: support – Computer power control – Power sequencing
U.S. Class: C712S040000
BACKGROUND OF THE INVENTION
The present invention relates to a data processing system having a memory packaged therein for realizing large-scale, fast parallel distributed processing and, more specifically, to a neural network processing system.
Parallel distributed data processing using neural networks, called “neuro-computing” (hereinafter shortly referred to as “neural network processing”), has attracted attention in the fields of acoustics, speech, and image processing, as described in Sejnowski, T. J., and Rosenberg, C. R., “Parallel networks that learn to pronounce English text,” Complex Systems 1, pp. 145-168 (1987), and in “Neural Network Processing,” edited by Hideki Asou and published by Sangyo Tosho. In neural network processing, a number of processing elements called “neurons,” connected in a network, exchange data through transfer lines called “connections” to perform high-grade data processing. In each neuron, the data sent from other neurons (i.e., the outputs of those neurons) are subjected to simple processing such as multiplication and summation. Since the processing within an individual neuron and the processing of different neurons can be carried out in parallel, neural network processing is in principle advantageous for fast data processing. Moreover, since algorithms (or learning procedures) for setting the connection weights of the neurons for a desired data processing have been proposed, the processing can be adapted to its object, as described in Rumelhart, D. E., Hinton, G. E., and Williams, R. J., “Learning representations by back-propagating errors,” Nature 323, pp. 533-536 (1986), and in Section 2 of the above “Neural Network Processing.”
SUMMARY OF THE INVENTION
First of all, the operating principle of the neural network will be described in connection with two representative kinds: the multi-layered network and the Hopfield network. FIG. 2(a) shows the structure of the multi-layered network, and FIG. 3(a) shows the structure of the Hopfield network. Both networks are constructed of interconnected neurons. The term “neuron” is used here; such elements are also called “nodes” or “processing elements,” as the case may be. The directions of the connection arrows indicate the directions in which the neuron outputs are transferred. In the multi-layered network, as shown in FIG. 2(a), the neurons are stacked in multiple layers so that the neuron outputs are transmitted only in the direction from the input layer to the output layer. Input signals IN_1, . . . , IN_n are inputted to the input layer, and output signals OUT_1, . . . , OUT_n are outputted from the output layer. In the Hopfield network, on the other hand, a neuron output may be fed back to the neuron itself, and outputs are transferred in both directions between any two neurons. The feedback to the neuron itself may be omitted.
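As a concrete illustration of the two structures, the following Python sketch builds the weight data each topology requires. It is not part of the patent; the names layer_sizes and hopfield_T, the sizes, and the random values are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)

# Multi-layered network: one weight matrix per pair of adjacent layers,
# so signals can flow only from the input layer toward the output layer.
layer_sizes = [3, 3, 3]            # e.g. three layers of three neurons each
layered_weights = [rng.standard_normal((m, n))
                   for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

# Hopfield network: a single n-by-n weight matrix; T[j, i] carries neuron
# i's output to neuron j (connections run both ways between any pair), and
# the diagonal T[j, j] is the optional feedback of a neuron to itself.
n = 5
hopfield_T = rng.standard_normal((n, n))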
FIGS. 2(b) and 3(b) show the processing principle carried out in the neurons. This principle is similar in either network and will be described for the multi-layered network with reference to FIG. 2(b). FIG. 2(b) shows the j-th neuron of the (s+1)-th layer on an enlarged scale. This neuron is fed through the connections with the output values V_{1s}, . . . , V_{is}, . . . , V_{n_s s} of the neurons of the preceding layer, i.e., the s-th layer. Here, n_s denotes the number of neurons in the s-th layer. In the neuron, the products V_{1s}T^s_{j1}, . . . , V_{is}T^s_{ji}, . . . , V_{n_s s}T^s_{jn_s} of the inputted output values and the corresponding connection weights T^s_{ji} are calculated by means of a multiplier MT. Next, the sum of these products and an offset Θ^{s+1}_j is calculated by means of an adder ADD. The offset Θ^{s+1}_j may be omitted, as the case may be. The result is then inputted to a nonlinear transfer function circuit D to obtain the output value V^{s+1}_j of the neuron. The nonlinear transfer function circuit D has characteristics as shown in FIG. 2(c) or FIG. 2(d), and outputs g(x) for an input x. FIG. 2(c) shows an example of a nonlinear transfer function that outputs a binary value g_1 or g_2 depending upon whether or not the input x exceeds a predetermined threshold value x_th, and FIG. 2(d) shows an example using a sigmoid function that issues continuous outputs. The circuit D may be given other characteristics if necessary and, as the case may be, may even be given linear characteristics.
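The operations just described (multiplier MT, adder ADD, and circuit D) amount to the update V^{s+1}_j = g( Σ_i T^s_{ji} V_{is} + Θ^{s+1}_j ). A minimal Python sketch follows; the function names and sample numbers are illustrative assumptions, not the patent's.

import math

def sigmoid_g(x):
    # FIG. 2(d): sigmoid transfer function giving continuous outputs.
    return 1.0 / (1.0 + math.exp(-x))

def threshold_g(x, x_th=0.0, g1=0.0, g2=1.0):
    # FIG. 2(c): binary transfer function; g2 if x exceeds x_th, else g1.
    return g2 if x > x_th else g1

def neuron_output(V_s, T_j, theta, g=sigmoid_g):
    # Multiplier MT and adder ADD: products of the inputted output values
    # and the connection weights, summed together with the offset.
    x = sum(v * t for v, t in zip(V_s, T_j)) + theta
    # Nonlinear transfer function circuit D.
    return g(x)

# The j-th neuron of layer s+1, fed by three neurons of layer s:
V_s = [0.2, 0.7, 0.1]          # V_{1s}, V_{2s}, V_{3s}
T_j = [0.5, -1.0, 0.3]         # T^s_{j1}, T^s_{j2}, T^s_{j3}
print(neuron_output(V_s, T_j, theta=0.1))             # continuous output
print(neuron_output(V_s, T_j, 0.1, g=threshold_g))    # binary output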
The processing principle described above is similar in the Hopfield network, as shown in FIG. 3(b). In the Hopfield network, however, one neuron receives not only the outputs of the neurons of the immediately preceding layer but the outputs of all neurons. In the multi-layered network, as seen from FIGS. 2(a) and 2(b), one processing pass is completed by first feeding the output values of the neurons of the input layer, then updating the output values of the neurons of the next layer, then those of the layer after that, and so on. In the Hopfield network of FIG. 3(a), on the other hand, there are no layers, so the output values of the individual neurons can be updated at arbitrary timings. In the Hopfield network, all the neuron output values are suitably initialized, and their updating is continued until they reach an equilibrium state. The network in which the output values of all neurons are updated simultaneously is called the “synchronized Hopfield network,” whereas the network in which the output values are updated at arbitrary timings is called the “unsynchronized Hopfield network,” so that the two are distinguished.
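The distinction between the two update disciplines can be made concrete with a short sketch. This is an assumption-laden illustration (±1 outputs, symmetric random weights, zero offsets), not the patent's circuitry.

import numpy as np

def g(x):
    # Binary transfer function of the FIG. 2(c) type, with outputs ±1.
    return np.where(x > 0.0, 1.0, -1.0)

def step_synchronized(T, V, theta):
    # Synchronized network: every output recomputed from the old state.
    return g(T @ V + theta)

def step_unsynchronized(T, V, theta, order):
    # Unsynchronized network: neurons update one at a time, each seeing
    # the latest output values of the others.
    V = V.copy()
    for j in order:
        V[j] = g(T[j] @ V + theta[j])
    return V

def run_to_equilibrium(T, V, theta, max_iters=100):
    # Updating is continued until no output value changes any more.
    for _ in range(max_iters):
        V_new = step_synchronized(T, V, theta)
        if np.array_equal(V_new, V):
            break
        V = V_new
    return V

n = 4
rng = np.random.default_rng(1)
T = rng.standard_normal((n, n))
T = (T + T.T) / 2.0                     # two-way connections between pairs
theta = np.zeros(n)
V0 = rng.choice([-1.0, 1.0], size=n)
print(run_to_equilibrium(T, V0, theta))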
Two methods have been used for realizing the aforementioned neural networks: one employs software, the other hardware. According to the software method, the processing of the neurons is carried out by a program written in a computer language, so that the number or structure of the neurons can be changed easily. Since the processing is performed sequentially, however, the software method has the disadvantage that the data processing time grows steeply as the number of neurons increases. In a Hopfield network of n neurons, n products have to be calculated to update the output of one neuron. To update the output values of all neurons at least once, therefore, n^2 products need to be calculated. In other words, the number of calculations increases on the order of n^2 with the neuron number n; a network of 1,000 neurons, for example, requires about one million multiplications for a single full update. As a result, the data processing time increases on the order of n^2 if the multiplications are performed sequentially.
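As a quick illustration of this growth (not from the patent):

# n products per neuron and n neurons per full update => n*n products.
for n in (10, 100, 1000):
    print(f"{n} neurons -> {n * n} multiplications per full update")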
According to the hardware method, the processing time can be shortened by implementing the multiplying neurons in hardware. Another attempt at speeding up the processing executes it in parallel with a number of hardware neurons. If the number of neurons is enlarged, however, the number of wiring lines acting as signal lines between the neurons increases on the order of n^2, making it difficult to realize a large-scale network. A method of solving this wiring problem is exemplified on pp. 123-129 of Nikkei Microdevice, March 1989, and its principle is described with reference to FIG. 4.
FIG. 4 shows an example in which a multi-layered network composed of three layers, each having three neurons, is constructed of analog neuro-processors (ANP) and SRAMs. The ANP integrates one multiplier MT and one adder ADD of FIG. 2(b) and a nonlinear transfer function circuit D into one chip. The accompanying SRAM chip stores the connection weights belonging to each neuron. The neurons of different layers are connected through one signal line called the “analog common bus”. Since the neuron output value of an input layer is inputted from the outside
Inventors: Itoh Kiyoo; Kawajiri Yoshiki; Kimura Katsutaka; Watanabe Takao
Assignee: Hitachi, Ltd.
Law Firm: Mattingly, Stanger & Malur, P.C.