Method and apparatus for operating a neural network with...

Data processing: artificial intelligence – Neural network – Learning task

Reexamination Certificate


Details

C706S022000

active

06957203

ABSTRACT:
A neural network system is provided that models a system with a system model (12), the output of which provides a predicted output. This predicted output is modified or controlled by an output control (14). Input data are processed in a data preprocess step (10) to reconcile the data for input to the system model (12). Additionally, the error resulting from the reconciliation is input to an uncertainty model (18) to predict the uncertainty in the predicted output. This uncertainty is input to a decision processor (20), which is utilized to control the output control (14). The output control (14) is controlled either to vary the predicted output or to inhibit it whenever the output of the uncertainty model (18) exceeds a predetermined decision threshold, input by a decision threshold block (22). Additionally, a validity model (16) is provided which represents the reliability or validity of the output as a function of the number of data points in a given data region during training of the system model (12). This model predicts the confidence in the predicted output, which is also input to the decision processor (20). The decision processor (20) therefore bases its decision on both the predicted confidence and the predicted uncertainty. Additionally, the uncertainty output by the data preprocess block (10) can be utilized to train the system model (12).
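The decision logic described in the abstract — pass, vary, or inhibit the system model's prediction based on the uncertainty model and the validity model — can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation; all function names, parameter names, and numeric thresholds are assumptions introduced here for clarity.

```python
def decision_processor(predicted_output,
                       predicted_uncertainty,
                       predicted_confidence,
                       decision_threshold=0.5,
                       confidence_floor=0.2):
    """Gate the system model's predicted output.

    Illustrative sketch of the decision processor (20): the prediction
    is inhibited (None returned) when the uncertainty model's output
    exceeds the decision threshold, or when the validity model reports
    low confidence (i.e., sparse training data in the input region).
    All thresholds here are hypothetical values.
    """
    if predicted_uncertainty > decision_threshold:
        return None  # inhibit: uncertainty exceeds the decision threshold
    if predicted_confidence < confidence_floor:
        return None  # inhibit: validity model reports low confidence
    return predicted_output  # pass the prediction through unchanged


# Example usage with hypothetical values:
print(decision_processor(3.7, 0.1, 0.9))  # low uncertainty, high confidence: 3.7
print(decision_processor(3.7, 0.8, 0.9))  # uncertainty above threshold: None
```

In a fuller implementation the "vary" branch would adjust the prediction (e.g., widen its reported bounds) rather than simply passing or inhibiting it; the binary gate above captures only the inhibit behavior tied to the decision threshold block (22).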

REFERENCES:
patent: 4802103 (1989-01-01), Faggin et al.
patent: 4813077 (1989-03-01), Woods et al.
patent: 4872122 (1989-10-01), Altschuler et al.
patent: 4910691 (1990-03-01), Skeirik
patent: 4965742 (1990-10-01), Skeirik
patent: 5006992 (1991-04-01), Skeirik
patent: 5052043 (1991-09-01), Gaborski
patent: 5081651 (1992-01-01), Kubo
patent: 5111531 (1992-05-01), Grayson et al.
patent: 5113483 (1992-05-01), Keeler et al.
patent: 5121467 (1992-06-01), Skeirik
patent: 5140523 (1992-08-01), Frankel et al.
patent: 5150313 (1992-09-01), van den Engh et al.
patent: 5175797 (1992-12-01), Funabashi et al.
patent: 5255347 (1993-10-01), Matsuba et al.
patent: 5276771 (1994-01-01), Manukian et al.
patent: 5335291 (1994-08-01), Kramer et al.
patent: 5353207 (1994-10-01), Keeler et al.
patent: 5402519 (1995-03-01), Inoue et al.
patent: 5444820 (1995-08-01), Tzes et al.
patent: 5461699 (1995-10-01), Arbabi et al.
patent: 5467428 (1995-11-01), Ulug
patent: 5479573 (1995-12-01), Keeler et al.
patent: 5559690 (1996-09-01), Keeler et al.
patent: 5581459 (1996-12-01), Enbutsu et al.
patent: 5613041 (1997-03-01), Keeler et al.
patent: 5659667 (1997-08-01), Buescher et al.
patent: 5704011 (1997-12-01), Hansen et al.
patent: 5720003 (1998-02-01), Chiang et al.
patent: 5729661 (1998-03-01), Keeler et al.
patent: 5819006 (1998-10-01), Keeler et al.
patent: 6002839 (1999-12-01), Keeler et al.
patent: 6169980 (2001-01-01), Keeler et al.
patent: 6314414 (2001-11-01), Keeler et al.
patent: 6591254 (2003-07-01), Keeler et al.
patent: 0262647 (1988-04-01), None
patent: 0327268 (1989-08-01), None
patent: 0436916 (1991-07-01), None
patent: WO 94/12948 (1994-06-01), None
patent: WO 94/17482 (1994-08-01), None
patent: WO 94/17489 (1994-08-01), None
Phoha, Shashi; "Using the National Information Infrastructure (NII) for Monitoring, Diagnostics and Prognostics of Operating Machinery," IEEE Proceedings of the 35th Conference on Decision and Control, Kobe, Japan, pp. 2583-2587, Dec. 1996.
Hartman, Eric J., Keeler James D., Kowalski, Jacek M.; “Layered Neural Networks with Gaussian Hidden Units as Universal Approximations,” Neural Computation 2, 1990, Massachusetts Institute of Technology, pp. 210-215.
Hartman, Eric, Keeler, James D.; “Predicting the Future: Advantages of Semilocal Units,” Neural Computation 3, 1991, Massachusetts Institute of Technology, pp. 566-578.
Press, William H., Flannery, Brian P., Teukolsky, Saul A., Vetterling, William T.; Numerical Recipes: The Art of Scientific Computing, 1986, ch. 3, “Interpolation and Extrapolation,” pp. 77-101.
Serth, R.W., Heenan, W.A.; "Gross Error Detection and Data Reconciliation in Steam-Metering Systems," AIChE Journal, vol. 32, No. 5, May 1986, pp. 733-742.
Myung-Sub Roh et al.; Thermal Power Prediction of Nuclear Power Plant Using Neural Network and Parity Space Model, IEEE Transactions on Nuclear Science, vol. 38, No. 2, Apr. 1991, pp. 866-872.
Autere, Antti: On Correcting Systematic Errors Without Analyzing Them by Performing a Repetitive Task, IEEE / RSI International Workshop on Intelligent Robots and Systems IROS '91, Nov. 3-5, 1991, pp. 472-477.
Kimoto, Takashi et al.; Stock Market Prediction System with Modular Neural Networks, IEEE, Jun. 1990, pp. I-1 to I-6.
Dorronsoro et al.; “Neural Fraud Detection in Credit Card Operations,” IEEE Transactions on Neural Networks, vol. 8, No. 4, Jul. 1997, pp. 827-834.
Spenceley, S.E., Warren, J.R.; "The Intelligent Interface for Online Electronic Medical Records Using Temporal Data Mining," Proceedings of the 31st Annual Hawaii International Conference on System Sciences, Jan. 1998, pp. 266-745.
Koutsougeras; “A feedforward neural network classifier model: multiple classes, confidence output values, and implementation,” International Journal of Pattern Recognition and Artificial Intelligence Ed. World Scientific Publishing Co., Oct. 1992, vol. 6, No. 4, pp. 539-569.
Lapedes, Alan, Farber, Robert; “How Neural Nets Work,” American Inst. of Physics, 1988, pp. 442-457.
Weigend, Andreas S., Huberman, Bernardo A., Rumelhart, David E.; "Predicting the Future: A Connectionist Approach," Stanford University, Stanford-PDP-90-01/PARC-SSL-90-20, Apr. 1990.
Rumelhart, D.E., Hinton, G.E., Williams, R.J.; "Learning Internal Representations by Error Propagation," Parallel Distributed Processing, vol. 1, 1986.
Rander, P.W., Unnikrishnan, K.P.; "Learning the Time-Delay Characteristics in a Neural Network," ICASSP-92, IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 23, 1992, vol. 2, pp. 285-288.
Tam, David C., Perkel, Donald H.; "A Model for Temporal Correlation of Biological Neuronal Spike Trains," IJCNN International Joint Conference on Neural Networks, Dept. of Physiology and Biophysics, University of California, pp. I-781 to I-786.
Levin, Esther, Gewirtzman, Raanan, Inbar, Gideon F.; “Neural Network Architecture for Adaptive System Modeling and Control,” Neural Networks, 4(1991) No. 2, pp. 185-191.
Troudet, T., Garg, S., Mattern, D., Merrill, W.; "Towards Practical Control Design Using Neural Computation," IJCNN-91-Seattle, International Joint Conference on Neural Networks, vol. 2, pp. 675-681, Jul. 8, 1991.
Beerhold, J.R., Jansen, M., Eckmiller, R.; “Pulse-Processing Neural Net Hardware with Selectable Topology and Adaptive Weights and Delays,” IJCNN International Joint Conference on Neural Networks, Jun. 17, 1990, vol. 2, pp. 569-574.
Haffner, Patrick, Franzini, Michael, Waibel, Alex; "Integrating Time Alignment and Neural Networks for High Performance Continuous Speech Recognition," ICASSP 91, sponsored by the Institute of Electrical and Electronics Engineers, Signal Processing Society, 1991 Int. Conf. on Acoustics, Speech and Signal Processing, May 14-17, 1991, pp. 105-108.
