Bayesian neural networks for optimization and control

Data processing: artificial intelligence – Neural network – Learning task

Reexamination Certificate


Details

U.S. Classification: C706S021000, C706S906000, C706S914000, C700S049000, C700S104000

Type: Reexamination Certificate (active)

Patent Number: 06725208


TECHNICAL FIELD OF THE INVENTION
The present invention pertains in general to neural networks for use in the optimization of plants and, more particularly, to the use of Bayesian-trained neural networks for optimization and control.
BACKGROUND OF THE INVENTION
In general, modeling techniques for a plant involve the generation of some type of model. This is typically done utilizing a single model, either a linear model or a non-linear model. However, another technique of generating a model is to utilize a plurality of models that can be utilized to define a predicted vector output y(t) of values y_1(t), y_2(t), \ldots, y_q(t) as a function of an input vector x(t) of values x_1(t), x_2(t), \ldots, x_p(t). For the purposes of this application, a vector in the text shall be defined in bold and in equation form shall be defined with an overstrike arrow.
Given a set D of n measured process data points:

D = \{(\vec{x}^{(1)}, \vec{y}^{(1)}), (\vec{x}^{(2)}, \vec{y}^{(2)}), \ldots, (\vec{x}^{(n)}, \vec{y}^{(n)})\}  (1)
and assuming that an underlying mapping exists with the following relationship:

\vec{y} = F(\vec{x})  (2)

a stochastical method for generating y(t) with respect to x(t) can be defined by averaging over many (non-linear) regression models F^{(w)}. Given x(t), F(x(t)) is approximated via a stochastic neural network training algorithm (non-linear regression) by the set of functions F^{(w)}(x(t)), with w being the index over the models, each fitted to the pairs (x(t), y(t)) in the dataset D. However, this only provides a forward predictive model and does not facilitate its use for optimization or control purposes.
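The averaging over regression models F^(w) described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the toy dataset, network sizes, and the use of differing random initializations as a crude stand-in for sampling networks from a Bayesian posterior are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy process dataset D = {(x^(i), y^(i))}: a noisy non-linear plant response.
n = 200
x = rng.uniform(-3.0, 3.0, size=(n, 1))
y = np.sin(x) + 0.1 * rng.normal(size=(n, 1))

def train_net(x, y, hidden=16, epochs=3000, lr=0.1, seed=0):
    """Fit one non-linear regression model F^(w): a one-hidden-layer tanh
    network trained by full-batch gradient descent on mean squared error."""
    r = np.random.default_rng(seed)
    W1 = r.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(x @ W1 + b1)            # hidden activations
        pred = h @ W2 + b2                  # forward prediction
        g = 2.0 * (pred - y) / len(x)       # d(MSE)/d(pred)
        gW2 = h.T @ g; gb2 = g.sum(axis=0)
        gh = (g @ W2.T) * (1.0 - h**2)      # backprop through tanh
        gW1 = x.T @ gh; gb1 = gh.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda xq: np.tanh(xq @ W1 + b1) @ W2 + b2

# Several models F^(1)..F^(W) trained from different random initial weights.
models = [train_net(x, y, seed=w) for w in range(5)]

def predict(xq):
    """Averaged forward prediction: y(t) = mean over w of F^(w)(x(t))."""
    return np.mean([m(xq) for m in models], axis=0)
```

As the passage notes, this yields only a forward predictive model; nothing here yet drives the input toward a desired output.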
SUMMARY OF THE INVENTION
The present invention disclosed and claimed herein comprises a method for optimizing a system in which a plant is provided for optimization. A training network is also provided, having an input layer for receiving inputs to the plant, an output layer for outputting predicted outputs, and a hidden layer for storing a learned representation of the plant that maps the input layer to the output layer. A method is further provided for training the neural network utilizing a Bayesian-type stochastical method.
In another aspect of the present invention, a method is provided for utilizing the network in an optimization mode, in feedback from the output of the plant to the input of the plant, to optimize the output with respect to the input via the stochastical Bayesian method.
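One way such an optimization mode can work is to hold the trained forward model fixed and adjust the plant input by gradient descent on the setpoint error. The sketch below is a generic illustration under stated assumptions, not the patent's claimed algorithm: `optimize_input`, the finite-difference gradient, and the quadratic `plant_model` stand-in are all hypothetical names introduced for the example.

```python
import numpy as np

def optimize_input(model, x0, target, steps=300, lr=0.01, eps=1e-5):
    """Drive the model's predicted output toward a desired setpoint by
    gradient descent on the input: x <- x - lr * d/dx (F(x) - target)^2.
    The gradient is estimated by finite differences, so any forward
    model (e.g. an averaged ensemble prediction) can be plugged in."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f0 = float(model(x))
        grad = np.zeros_like(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            # chain rule: 2 (F(x) - target) * dF/dx_i
            grad[i] = 2.0 * (f0 - target) * (float(model(xp)) - f0) / eps
        x -= lr * grad
    return x

# Illustrative stand-in for a trained forward model of the plant.
def plant_model(x):
    return float(x[0] ** 2)

# Find the input whose predicted output is 4 (expects x near 2 from x0 = 1).
x_opt = optimize_input(plant_model, x0=[1.0], target=4.0)
```

In a feedback arrangement as described above, the optimized input would then be applied to the plant and the measured output used to correct the model or the setpoint error on the next iteration.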


REFERENCES:
patent: 4992942 (1991-02-01), Bauerle et al.
patent: 5023045 (1991-06-01), Watanabe et al.
patent: 5159660 (1992-10-01), Lu et al.
patent: 5465321 (1995-11-01), Smyth
patent: 5513097 (1996-04-01), Gramckow et al.
patent: 5586221 (1996-12-01), Isik et al.
patent: 5659667 (1997-08-01), Buescher et al.
patent: 5680513 (1997-10-01), Hyland et al.
patent: 5781432 (1998-07-01), Keeler et al.
patent: 5796920 (1998-08-01), Hyland
patent: 5825646 (1998-10-01), Keeler et al.
patent: 5867386 (1999-02-01), Hoffberg et al.
patent: 5877954 (1999-03-01), Klimasauskas et al.
patent: 5901246 (1999-05-01), Hoffberg et al.
patent: 5933345 (1999-08-01), Martin et al.
patent: 6185470 (2001-02-01), Pado et al.
patent: 6212438 (2001-04-01), Reine
patent: 6216048 (2001-04-01), Keeler et al.
patent: 6278899 (2001-08-01), Piche et al.
patent: 6353766 (2002-03-01), Weinzierl
patent: 6363289 (2002-03-01), Keeler et al.
patent: 6381504 (2002-04-01), Havener et al.
patent: 6438430 (2002-08-01), Martin et al.
patent: 6438534 (2002-08-01), Sorgel
Carson et al.; "Simulation Optimization: Methods and Applications". Proceedings of the 1997 Winter Simulation Conference, Winter 1997, pp. 118-126.*
Xu, L.; "Bayesian Ying-Yang System and Theory as a Unified Statistical Learning Approach: (IV) Further Advances". The 1998 IEEE International Joint Conference on Neural Networks Proceedings, vol. 2, May 1998, pp. 1275-1280.*
Yamamura et al.; “Reinforcement Learning with Knowledge by Using a Stochastic Gradient Method on a Bayesian Network”. The 1998 IEEE International Joint Conference on Neural Networks Proceedings, vol. 3, May 1998, pp. 2045-2050.*
Piche, S.; “Robustness of Feedforward Neural Networks”. International Joint Conference on Neural Networks, vol. 2, Jun. 1992, pp. 346-351.*
Piche, S.; “Steepest Descent Algorithms for Neural Network Controllers and Filters”. IEEE Transactions on Neural Networks, vol. 5, No. 2, Mar. 1994, pp. 198-212.*
Andrieu et al.; “Bayesian Blind Marginal Separation of Convolutively Mixed Discrete Sources”. Proceedings of the 1998 IEEE Signal Processing Society Workshop, Aug. 1998, pp. 43-52.*
Dong et al.; "A Self-Organizing Reasoning Neural Network". 1994 IEEE International Conference on Neural Networks, vol. 3, Jun. 1994, pp. 1542-1545.*
Alhakeem et al.; “Decentralized Bayesian Detection with Feedback”. IEEE Transactions on Systems, Man and Cybernetics, vol. 26, Iss. 4, Jul. 1996, pp. 503-513.*
Cox et al.; “An Optimized Interaction Strategy for Bayesian Relevance Feedback”. 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 1998, pp. 553-558.*
Hartman et al.; “Semi-local Units for Prediction”, IJCNN-91-Seattle International Joint Conference on Neural Networks, Jul. 1991, vol. 2, pp. 561-566.*
Piche, S.W.; “The Second Derivative of a Recurrent Network”. IEEE World Congress on Computational Intelligence, vol. 1, Jun. 1994, pp. 245-250.*
Lasdon, Leon; "Optimization Tutorial," College of Business Administration, University of Texas, Austin; Jun. 1995.
Stuart Smith, Leon Lasdon; “Solving Large Sparse Nonlinear Programs Using GRG,” ORSA Journal on Computing, vol. 4, No. 1, Winter 1992, pp. 1-15.
Jeong-Woo Lee and Jun-Ho Oh; “Hybrid Learning of Mapping and Its Jacobian in Multilayer Neural Networks,” Neural Computation 9, 1997 Massachusetts Institute of Technology, pp. 937-958.
Jouko Lampinen and Arto Selonen; "Multilayer Perceptron Training with Inaccurate Derivative Information," Proc. 1995 IEEE International Conference on Neural Networks ICNN '95, Perth, WA, vol. 5, pp. 2811-2815, 1995.
T. P. Vogl, J. K. Mangis, A. K. Rigler, W. T. Zink, and D. L. Alkon; "Accelerating the Convergence of the Back-Propagation Method," Biological Cybernetics, 59, pp. 257-263, 1988.
Kurt Hornik; “Multilayer Feedforward Networks are Universal Approximators,” Neural Networks, vol. 2, pp. 359-366, 1989.
