Data processing: artificial intelligence – Knowledge processing system
Type: Reexamination Certificate
Filed: 2000-04-07
Issued: 2002-12-10
Examiner: Starks, Jr., Wilbert L. (Department: 2122)
U.S. Classification: C706S023000, C187S247000
Status: active
Patent Number: 06493691
ABSTRACT:
This application contains a microfiche appendix submitted on 1 microfiche sheet and 69 frames.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The invention is directed to an arrangement of computer elements connected to one another, to methods for the computer-supported determination of the dynamics underlying a dynamic process, and to a method for the computer-supported training of such an arrangement of computer elements connected to one another.
2. Description of the Related Art
The publication by S. Haykin, Neural Networks: A Comprehensive Foundation, discloses that an arrangement of computer elements connected to one another can be utilized for determining the dynamics underlying a dynamic process.
A dynamic process is usually described by a status transition description that is not visible to an observer of the dynamic process and by an output equation that describes the observable quantities of the technical dynamic process.
Such a structure is shown in FIG. 2.
A dynamic system 200 is subject to the influence of an external input quantity u of a prescribable dimension, whereby the input quantity at a time t is referenced u_t:

u_t ∈ ℝ^l,

whereby l references a natural number.
The input quantity u_t at a time t causes a change of the dynamic process running in the dynamic system 200.
An internal condition s_t (s_t ∈ ℝ^m) of a prescribable dimension m at a time t cannot be observed by an observer of the dynamic system 200.
Dependent on the internal condition s_t and on the input quantity u_t, a status transition of the internal condition s_t of the dynamic process is caused, and the status of the dynamic process switches into a successor status s_{t+1} at a following point in time t+1.
The following is thereby valid:

s_{t+1} = f(s_t, u_t),  (1)

whereby f(·) references a general mapping rule.
An output quantity y_t at a point in time t that is observable by an observer of the dynamic system 200 is dependent on the input quantity u_t as well as on the internal status s_t.
The output quantity y_t (y_t ∈ ℝ^n) is of a prescribable dimension n.
The dependency of the output quantity y_t on the input quantity u_t and on the internal status s_t of the dynamic process is established by the following general rule:

y_t = g(s_t, u_t),  (2)

whereby g(·) references a general mapping rule.
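To make the status transition description concrete, the two rules (1) and (2) can be sketched as a small simulation. The maps f and g below are illustrative stand-ins of my own choosing, not the mappings of any particular system described here; only the structure (hidden state update plus observable output) follows the text.

```python
# Sketch of the state-transition description: s_{t+1} = f(s_t, u_t)
# and y_t = g(s_t, u_t). The concrete f and g are assumptions.
import numpy as np

def f(s, u):
    # example state transition (an assumed linear map plus input influence)
    return 0.9 * s + 0.1 * u

def g(s, u):
    # example output equation combining hidden state and input
    return s + 0.5 * u

def simulate(s0, inputs):
    """Run the dynamic system for a sequence of inputs u_1..u_T,
    returning the observable outputs y_1..y_T."""
    s, outputs = s0, []
    for u in inputs:
        outputs.append(g(s, u))   # observable output y_t
        s = f(s, u)               # hidden successor state s_{t+1}
    return outputs

ys = simulate(np.zeros(2), [np.ones(2)] * 3)
```

Note that the internal state s is never returned to the caller, mirroring the point that only y_t is observable.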
For describing the dynamic system 200, the Haykin publication utilizes an arrangement of computer elements connected to one another in the form of a neural network of neurons connected to one another. The connections between the neurons of the neural network are weighted. The weightings of the neural network are combined in a parameter vector v.
An inner status of a dynamic system that underlies a dynamic process is thus dependent, according to the following rule, on the input quantity u_t, on the internal status of the preceding point in time s_t, and on the parameter vector v:

s_{t+1} = NN(v, s_t, u_t),  (3)

whereby NN(·) references a mapping rule prescribed by the neural network.
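Rule (3) replaces the unknown map f by a parameterized network. A minimal sketch, assuming a one-hidden-layer tanh network whose weights play the role of the parameter vector v (the layer sizes and weight initialization are assumptions for illustration):

```python
# Illustrative state transition s_{t+1} = NN(v, s_t, u_t).
# v is the collection of network weights; sizes are assumed.
import numpy as np

def nn_transition(v, s, u):
    """Compute the successor state with a small tanh network."""
    W1, b1, W2, b2 = v                  # unpack the parameter vector v
    x = np.concatenate([s, u])          # network input: state s_t and input u_t
    h = np.tanh(W1 @ x + b1)            # hidden activation
    return W2 @ h + b2                  # successor state s_{t+1}

rng = np.random.default_rng(0)
m, l, hdim = 2, 1, 4                    # state dim m, input dim l, hidden size
v = (rng.normal(size=(hdim, m + l)), np.zeros(hdim),
     rng.normal(size=(m, hdim)), np.zeros(m))
s_next = nn_transition(v, np.zeros(m), np.ones(l))
```

Training then amounts to adjusting the entries of v so that the resulting outputs match the observed target quantities.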
The arrangement known from Haykin and referred to as a Time Delay Recurrent Neural Network (TDRNN) is trained in a training phase such that a respective target quantity y_t^d at a real dynamic system is determined for each input quantity u_t. The tuple (input quantity, identified target quantity) is referred to as a training datum. A plurality of such training data form a training data set.
The TDRNN is trained with the training data set. An overview of various training methods can likewise be found in Haykin.
It must be emphasized at this point that only the output quantity y_t at a time t of the dynamic system 200 can be recognized. The internal system status s_t cannot be observed.
The following cost function E is usually minimized in the training phase:

E = (1/T) · Σ_{t=1}^{T} (y_t − y_t^d)² → min over f, g,  (4)

whereby T references the number of points in time taken into consideration.
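The cost function (4) is the mean squared deviation between network outputs and targets over the T considered points in time. A direct transcription:

```python
# Cost function E = (1/T) * sum_{t=1}^{T} (y_t - y_t^d)^2,
# the quantity minimized over the mappings f and g during training.
import numpy as np

def cost(y, y_target):
    """Mean squared error between outputs y_t and targets y_t^d."""
    y, y_target = np.asarray(y), np.asarray(y_target)
    T = len(y)
    return float(np.sum((y - y_target) ** 2) / T)
```

For example, outputs (1, 2, 3) against targets (1, 2, 5) over T = 3 points in time give E = 4/3.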
The publication by Ackley et al., A Learning Algorithm for Boltzmann Machines, discloses what is known as a neural auto-associator (see FIG. 3).
The auto-associator 300 is composed of an input layer 301, three hidden layers 302, 303, 304, as well as an output layer 305.
The input layer 301, a first hidden layer 302, and a second hidden layer 303 form a unit with which a first non-linear coordinate transformation g can be implemented.
The second hidden layer 303, together with a third hidden layer 304 and the output layer 305, forms a second unit with which a second non-linear coordinate transformation h can be implemented.
This five-layer neural network 300 disclosed by Ackley et al. has the property that an input quantity x_t is transformed onto an internal system status according to the first non-linear coordinate transformation g. Proceeding from the second hidden layer 303, via the third hidden layer 304 toward the output layer 305, the internal system status is transformed essentially back onto the input quantity x_t upon employment of the second non-linear coordinate transformation h. The goal of this known structure is the mapping of the input quantity x_t in a first status space X onto the internal status s_t in a second status space S, whereby the dimension Dim(S) of the second status space should be smaller than the dimension Dim(X) of the first status space in order to achieve a data compression in the hidden layer of the neural network. The back-transformation into the first status space X corresponds to a decompression in this case.
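The five-layer structure can be sketched as a bottleneck network: two layers compress the input into the low-dimensional internal status, and two layers decompress it again. The layer sizes and random weights below are assumptions chosen only to satisfy Dim(S) < Dim(X); they are not the dimensions of any system described here.

```python
# Sketch of the five-layer auto-associator: input layer, three hidden
# layers with a low-dimensional bottleneck, and an output layer that
# reconstructs the input. All sizes and weights are assumptions.
import numpy as np

def layer(W, b, x):
    return np.tanh(W @ x + b)

def autoassociator(params, x):
    """Compress x (space X) to s (space S, Dim(S) < Dim(X)), then decompress."""
    h1 = layer(params["W1"], params["b1"], x)    # first hidden layer
    s = layer(params["W2"], params["b2"], h1)    # bottleneck: internal status
    h3 = layer(params["W3"], params["b3"], s)    # third hidden layer
    x_hat = params["W4"] @ h3 + params["b4"]     # reconstruction of x
    return s, x_hat

rng = np.random.default_rng(1)
dims = [8, 6, 3, 6, 8]                           # Dim(X)=8, Dim(S)=3
params = {f"W{i}": rng.normal(scale=0.1, size=(dims[i], dims[i - 1]))
          for i in range(1, 5)}
params.update({f"b{i}": np.zeros(dims[i]) for i in range(1, 5)})
s, x_hat = autoassociator(params, rng.normal(size=8))
```

Training such a network to minimize the reconstruction error between x and x_hat forces the 3-dimensional bottleneck to carry a compressed representation of the 8-dimensional input.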
The publication by H. Rehkugler et al., Neuronale Netze in der Ökonomie, Grundlagen und finanzwirtschaftliche Anwendungen also provides an overview of the fundamentals of neural networks and the possible applications of neural networks in the field of economics.
The known arrangements and methods exhibit in particular the disadvantage that an identification or, respectively, a modeling of a dynamic system that is subject to substantial noise, i.e. whose structure in the time domain is extremely complex, is not possible.
SUMMARY OF THE INVENTION
The present invention is thus based on the problem of providing an arrangement of computer elements connected to one another with which a modeling of a dynamic system that is subject to noise is possible, the arrangement not being subject to the disadvantages of the known arrangements.
The invention is also based on the problem of providing a method for the computer-supported determination of the dynamics underlying a dynamic process, for dynamic processes that can be determined only with inadequate precision using known methods.
The problems are solved by the arrangement as well as by the methods INSERT CLAIMS 2-26.
The arrangement of computer elements connected to one another comprises the following features:
a) input computer elements to which time row values that respectively describe a status of a system at a point in time can be supplied;
b) transformation computer elements for the transformation of the time row values into a predetermined space, the transformation elements being connected to the input computer elements;
c) whereby the transformation computer elements are connected to one another such that transformed signals can be taken at the transformation computer elements, whereby at least three transformed signals relate to respectively successive points in time;
d) composite computer elements that are connected to respectively two transformation computer elements;
e) a first output computer element that is connected to the transformation computer elements, whereby an output signal can be taken at the first output computer element; and
f) a second output computer element that is connected to the composite computer elements and with whose employment a predetermined condition can be taken into consideration in a training of the arrangement.
A method fo
Inventors: Neuneier, Ralf; Zimmermann, Hans-Georg
Assignee: Siemens AG
Examiner: Starks, Jr., Wilbert L.