System and method for training and using interconnected...

Data processing: artificial intelligence – Neural network – Structure

Reexamination Certificate


Details

C706S014000


active

06728691


BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention relates to an arrangement of computation elements which are connected to one another to form a computer system, a method for computer-aided determination of a dynamic response on which a dynamic process is based, and a method for computer-aided training of an arrangement of computation elements which are connected to one another.
2. Description of the Related Art
Pages 732-789 of Neural Networks: A Comprehensive Foundation, Second Edition, by S. Haykin, published by Macmillan College Publishing Company in 1999, describe the use of an arrangement of computation elements which are connected to one another for determining a dynamic response on which a dynamic process is based.
In general, a dynamic process is normally described by a state transition description, which is not visible to an observer of the dynamic process, and an output equation, which describes observable variables of the technical dynamic process. One such structure is shown in FIG. 2.
A dynamic system 200 is subject to the influence of an external input variable u whose dimension can be predetermined, with the input variable at a time t being annotated u_t:

u_t ∈ ℝ^l,

where l denotes a natural number.
The input variable u_t at a time t causes a change in the dynamic process taking place in the dynamic system 200.
An inner state s_t (s_t ∈ ℝ^m), whose dimension m at a time t can be predetermined, cannot be observed by an observer of the dynamic system 200.
Depending on the inner state s_t and the input variable u_t, a state transition is caused in the inner state of the dynamic process, and the state of the dynamic process changes to a subsequent state s_{t+1} at a subsequent time t+1. In this case:

s_{t+1} = f(s_t, u_t),  (1)

where f(.) denotes a general mapping rule.
An output variable y_t at time t, which can be observed by an observer of the dynamic system 200, depends on the input variable u_t and on the inner state s_t. The output variable y_t (y_t ∈ ℝ^n) has a dimension n which can be predetermined.
The dependency of the output variable y_t on the input variable u_t and on the inner state s_t of the dynamic process is expressed by the following general rule:

y_t = g(s_t, u_t),  (2)

where g(.) denotes a general mapping rule.
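The interplay of rules (1) and (2) can be sketched in a short simulation. The concrete mappings below (a tanh-squashed linear state transition for f and a linear read-out for g), as well as the dimensions l, m and n, are illustrative assumptions only; the text leaves f(.) and g(.) completely general.

```python
import numpy as np

# Example dimensions: input u_t in R^l, inner state s_t in R^m, output y_t in R^n.
l, m, n = 2, 3, 1

rng = np.random.default_rng(0)
F = rng.standard_normal((m, m)) / np.sqrt(m)  # arbitrary state-to-state map
G = rng.standard_normal((m, l))               # arbitrary input-to-state map
H = rng.standard_normal((n, m))               # arbitrary state-to-output map
D = rng.standard_normal((n, l))               # arbitrary input-to-output map

def f(s, u):
    """State transition rule, equation (1): s_{t+1} = f(s_t, u_t)."""
    return np.tanh(F @ s + G @ u)

def g(s, u):
    """Output rule, equation (2): y_t = g(s_t, u_t)."""
    return H @ s + D @ u

# Drive the hidden process with a short input sequence; only y_t is observable.
s = np.zeros(m)
outputs = []
for t in range(5):
    u = rng.standard_normal(l)
    outputs.append(g(s, u))  # observable output at time t
    s = f(s, u)              # unobservable transition to the state at t+1
```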
In order to describe the dynamic system 200, S. Haykin describes using an arrangement of computation elements, which are connected to one another, in the form of a neural network of neurons which are connected to one another. The connections between the neurons in the neural network are weighted. The weights in the neural network are combined in a parameter vector v.
An inner state of a dynamic system which is subject to a dynamic process is thus, in accordance with the following rule, dependent on the input variable u_t, the inner state s_t at the previous time, and the parameter vector v:

s_{t+1} = NN(v, s_t, u_t),  (3)

where NN(.) denotes a mapping rule which is predetermined by the neural network.
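Rule (3) can be sketched as a single recurrent step. The specific form NN(v, s_t, u_t) = tanh(A·s_t + B·u_t), with the entries of A and B playing the role of the parameter vector v, is a common choice assumed here purely for illustration; the text leaves NN(.) general.

```python
import numpy as np

l, m = 2, 3  # example input and state dimensions

rng = np.random.default_rng(1)
# The parameter vector v here consists of the entries of A and B.
A = rng.standard_normal((m, m)) / np.sqrt(m)
B = rng.standard_normal((m, l))

def nn_step(s, u):
    """Equation (3): s_{t+1} = NN(v, s_t, u_t), with NN one tanh neuron layer."""
    return np.tanh(A @ s + B @ u)

# Iterate the learned state transition over a short input sequence.
s = np.zeros(m)
for t in range(4):
    s = nn_step(s, rng.standard_normal(l))
```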
An arrangement of computation elements which is referred to as a Time Delay Recurrent Neural Network (TDRNN) is described in David E. Rumelhart et al., Parallel Distributed Processing, Explorations in the Microstructure of Cognition, Vol. 1: Foundations, A Bradford Book, The MIT Press, Cambridge, Mass., London, England, 1987. The known TDRNN is illustrated in FIG. 5 as a neural network 500 which is unfolded over a finite number of times (the illustration shows 5 times: t−4, t−3, t−2, t−1, t).
The neural network 500 which is illustrated in FIG. 5 has an input layer 501 with five partial input layers 521, 522, 523, 524 and 525, which each contain a number (which can be predetermined) of input computation elements, to which input variables u_{t−4}, u_{t−3}, u_{t−2}, u_{t−1} and u_t can be applied at times t−4, t−3, t−2, t−1 and t which can be predetermined, that is to say time series values, which are described in the following text, with predetermined time steps.
Input computation elements, that is to say input neurons, are connected via variable connections to neurons in a number (which can be predetermined) of concealed layers 505 (the illustration shows 5 concealed layers). In this case, neurons in a first 531, a second 532, a third 533, a fourth 534 and a fifth 535 concealed layer are respectively connected to neurons in the first 521, the second 522, the third 523, the fourth 524 and the fifth 525 partial input layer.
The connections between the first 531, the second 532, the third 533, the fourth 534 and the fifth 535 concealed layer and, respectively, the first 521, the second 522, the third 523, the fourth 524 and the fifth 525 partial input layers are each the same. The weights of all these connections are each contained in a first connection matrix B_1.
Furthermore, the neurons in the first concealed layer 531 are connected from their outputs to inputs of neurons in the second concealed layer 532, in accordance with a structure which is governed by a second connection matrix A_1. The neurons in the second concealed layer 532 are connected by their outputs to inputs of neurons in the third concealed layer 533 in accordance with a structure which is governed by the same second connection matrix A_1. The neurons in the third concealed layer 533 are connected by their outputs to inputs of neurons in the fourth concealed layer 534, and the neurons in the fourth concealed layer 534 are connected by their outputs to inputs of neurons in the fifth concealed layer 535, in each case in accordance with the structure which is governed by the second connection matrix A_1.
Respective “inner” states or “inner” system states s_{t−4}, s_{t−3}, s_{t−2}, s_{t−1} and s_t of a dynamic process which is described by the TDRNN are represented at five successive times t−4, t−3, t−2, t−1 and t in the concealed layers, that is to say the first concealed layer 531, the second concealed layer 532, the third concealed layer 533, the fourth concealed layer 534 and the fifth concealed layer 535.
The details in the indices in the respective layers each indicate the time t−4, t−3, t−2, t−1 and t to which the signals (u_{t−4}, u_{t−3}, u_{t−2}, u_{t−1}, u_t) which can in each case be tapped off from or supplied to the outputs of the respective layer relate.
One output layer 520 has five partial output layers: a first partial output layer 541, a second partial output layer 542, a third partial output layer 543, a fourth partial output layer 544 and a fifth partial output layer 545. Neurons in the first partial output layer 541 are connected to neurons in the first concealed layer 531 in accordance with a structure which is governed by an output connection matrix C_1. Neurons in the second partial output layer 542 are likewise connected to neurons in the second concealed layer 532 in accordance with the structure which is governed by the output connection matrix C_1. Neurons in the third partial output layer 543, the fourth partial output layer 544 and the fifth partial output layer 545 are connected to neurons in the third concealed layer 533, the fourth concealed layer 534 and the fifth concealed layer 535, respectively, in each case in accordance with the output connection matrix C_1. The output variables for a respective time t−4, t−3, t−2, t−1, t can be tapped off (y_{t−4}, y_{t−3}, y_{t−2}, y_{t−1}, y_t) at the neurons in the partial output layers 541, 542, 543, 544 and 545.
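The unfolded structure described above, with the shared connection matrices B_1 (partial input layer to concealed layer), A_1 (concealed layer to next concealed layer) and C_1 (concealed layer to partial output layer), can be sketched as follows. The tanh activation and the dimensions are assumptions for illustration; the TDRNN description fixes only the connection structure and the sharing of the three matrices across the five unfolded time steps.

```python
import numpy as np

l, m, n = 2, 4, 1  # example dims: input, concealed (state), output
T = 5              # unfolding over five times: t-4 ... t

rng = np.random.default_rng(2)
# One shared copy of each connection matrix, reused at every time step.
B1 = rng.standard_normal((m, l))               # partial input layer -> concealed layer
A1 = rng.standard_normal((m, m)) / np.sqrt(m)  # concealed layer -> next concealed layer
C1 = rng.standard_normal((n, m))               # concealed layer -> partial output layer

def tdrnn_unfolded(inputs):
    """Run the unfolded network on inputs [u_{t-4}, ..., u_t].

    Returns the outputs [y_{t-4}, ..., y_t] tapped off the partial output
    layers and the inner states [s_{t-4}, ..., s_t] of the concealed layers.
    """
    s = np.zeros(m)  # no state enters the first concealed layer
    states, outputs = [], []
    for u in inputs:
        s = np.tanh(A1 @ s + B1 @ u)  # concealed layer at this time step
        states.append(s)
        outputs.append(C1 @ s)        # partial output layer at this time step
    return outputs, states

us = [rng.standard_normal(l) for _ in range(T)]
ys, ss = tdrnn_unfolded(us)
```

Because the same B_1, A_1 and C_1 are reused at every unfolded time step, the unfolded network has no more free weights than a single recurrent step, which is the point of the shared-matrix construction.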
The principle that equivalent
