Data processing: artificial intelligence – Neural network – Learning task
Reexamination Certificate
2007-12-25
Starks, Wilbert (Department: 2129)
C706S013000
active
10112069
ABSTRACT:
A method is described for improving the prediction accuracy and generalization performance of artificial neural network models when the input-output example data contain instrumental noise and/or measurement errors. The presence of noise and/or errors in the example data used for training makes it difficult for the network models to learn accurately the nonlinear relationships between the inputs and the outputs. To learn these noisy relationships effectively, the methodology creates a large noise-superimposed sample input-output dataset using computer simulations: a specific amount of Gaussian noise is added to each input/output variable in the example set, and the enlarged sample dataset thus created is used as the training set for constructing the artificial neural network model. The amount of noise to be added is specific to each input/output variable, and its optimal value is determined using a stochastic search and optimization technique, namely, genetic algorithms. A network trained on the noise-superimposed enlarged training set shows significant improvements in its prediction accuracy and generalization performance. The methodology is illustrated by its successful application to example data comprising instrumental errors and/or measurement noise from an industrial polymerization reactor and a continuous stirred tank reactor (CSTR).
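The data-enlargement step described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the patented procedure itself: the function name, the fixed per-variable noise levels, and the number of noisy replicates are assumptions for the example (in the patent, the per-variable noise magnitudes would be tuned by a genetic algorithm rather than chosen by hand).

```python
import numpy as np

def noise_superimposed_dataset(X, Y, sigma_x, sigma_y, n_copies=25, seed=0):
    """Create an enlarged training set by adding variable-specific
    Gaussian noise to each input/output example.

    X : (n_samples, n_inputs)  example inputs
    Y : (n_samples, n_outputs) example outputs
    sigma_x, sigma_y : per-variable noise standard deviations
        (illustrative fixed values; the patent optimizes these
        with a genetic algorithm)
    n_copies : noisy replicates generated per original example
    """
    rng = np.random.default_rng(seed)
    # Replicate each original example n_copies times.
    X_big = np.repeat(np.asarray(X, dtype=float), n_copies, axis=0)
    Y_big = np.repeat(np.asarray(Y, dtype=float), n_copies, axis=0)
    # Superimpose zero-mean Gaussian noise; the per-variable sigmas
    # broadcast across the replicated rows.
    X_big = X_big + rng.normal(0.0, sigma_x, size=X_big.shape)
    Y_big = Y_big + rng.normal(0.0, sigma_y, size=Y_big.shape)
    return X_big, Y_big

# Example: 10 noisy copies of each of 4 examples (3 inputs, 1 output).
X = np.arange(12.0).reshape(4, 3)
Y = X.sum(axis=1, keepdims=True)
Xn, Yn = noise_superimposed_dataset(X, Y, sigma_x=[0.1, 0.2, 0.05],
                                    sigma_y=[0.3], n_copies=10)
print(Xn.shape, Yn.shape)  # (40, 3) (40, 1)
```

The enlarged `(Xn, Yn)` set would then serve as the training data for the network; training on many noisy replicates of each example is what yields the improved generalization the abstract describes.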
REFERENCES:
patent: 4912651 (1990-03-01), Wood et al.
patent: 5140530 (1992-08-01), Guha et al.
patent: 5265192 (1993-11-01), McCormack
patent: 6353816 (2002-03-01), Tsukimoto
patent: 6449603 (2002-09-01), Hunter
patent: 6678669 (2004-01-01), Lapointe et al.
“Learning Neural Networks with noisy inputs using the errors-in-variables approach”, Van Gorp J., Schoukens J., Pintelon R., Neural Networks, IEEE Transactions on, Mar. 2000, vol. 11, issue 2, pp. 402-414.
“Creating Artificial Neural Networks that generalize”, Jocelyn Sietsma & Robert J. F. Dow, Neural Networks (USA), vol. 4, No. 1, pp. 67-79, (1991).
“Noise Injection into Inputs in Back-Propagation Learning”, Kiyotoshi Matsuoka, Systems, Man and Cybernetics, IEEE Transactions on, May-Jun. 1992, vol. 22, Issue 3, pp. 436-440.
“A Global Gradient-Noise Covariance Expression for Stationary Real Gaussian Inputs”, P. Edgar An, Martin Brown, and C. J. Harris, Neural Networks, IEEE Transactions on, vol. 6, Issue 6, Nov. 1995, pp. 1549-1551.
“Global Optimisation by Evolutionary Algorithms”, Xin Yao, Parallel Algorithms/Architecture Synthesis, 1997. Proceedings. Second Aizu International Symposium, Mar. 17-21, 1997, pp. 282-291.
“The effects of adding noise during backpropagation training on a generalization performance”, Guozhong An, Neural Computation (USA), vol. 8, No. 3, pp. 643-674, Apr. 1, 1996.
“Artificial neural network feedforward/feedback control of a batch polymerization reactor”, Shahrokhi, N.; Pishvaie, M.R.; American Control Conference, 1998. Proceedings of the 1998, vol. 6, Jun. 24-26, 1998, pp. 3391-3395.
“On-line re-optimisation control of a batch polymerisation reactor based on a hybrid recurrent neural network model”, Yuan Tian; Jie Zhang; Morris, J.; American Control Conference, 2001. Proceedings of the 2001, vol. 1, Jun. 25-27, 2001, pp. 350-355.
Poggio, T., et al., “Regularization Algorithms for Learning that are Equivalent to Multilayer Networks,” Science, vol. 247, pp. 978-982 (1990).
Rumelhart, D.E., et al., “Learning Representations by Back-Propagating Errors,” Nature, vol. 323, pp. 533-536 (Oct. 1986).
Van Gorp, J., et al., “The Errors-in-Variables Cost Function for Learning Neural Networks with Noisy Inputs,” Intelligent Engineering Systems Through Artificial Neural Networks, vol. 8, pp. 141-146 (1998).
Bishop, C.M., “Training with Noise is Equivalent to Tikhonov Regularization,” Neural Computation, vol. 7, pp. 108-116 (1995).
Goldberg, D.E., “A Gentle Introduction to Genetic Algorithms,” Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley, New York, 1989.
Holland, J., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, USA (pp. 1-7).
Tambe, S.S., Kulkarni, B.D., Deshpande, P.B., Elements of Artificial Neural Networks with Selected Applications in Chemical Engineering, and Chemical & Biological Sciences, Simulation & Advanced Controls, Inc: Louisville, USA, 1996 (pp. 19-23).
Dheshmukh Sanjay Vasantrao
Kulkarni Bhaskar Dattatray
Lonari Jayaram Budhaji
Ravichandran Sivaraman
Shenoy Bhavanishankar
Council of Scientific & Industrial Research
Starks Wilbert
Sughrue Mion Pllc.
Tran Mai T.