Neural networks for prediction and control

Data processing: artificial intelligence – Neural network – Learning task

Reexamination Certificate


Details

Type: Reexamination Certificate
Status: active
Patent number: 07395251

ABSTRACT:
Neural networks for optimal estimation (including prediction) and/or control involve an execution step and a learning step, and are characterized by the learning step being performed by neural computations. The set of learning rules causes the circuit's connection strengths to learn to approximate the optimal estimation and/or control function that minimizes estimation error and/or a measure of control cost. The classical Kalman filter and the classical Kalman optimal controller are important examples of such an optimal estimation and/or control function. The circuit uses only a stream of noisy measurements to infer relevant properties of the external dynamical system, to learn the optimal estimation and/or control function, and to apply that learned function to input data streams in an online manner. In this way, the circuit simultaneously learns and generates estimates and/or control output signals that are optimal given the network's current state of learning.
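For context, the optimal estimation function the abstract refers to is exemplified by the classical Kalman filter (Kalman, 1960, cited below). The sketch below shows a minimal scalar Kalman filter operating online on a stream of noisy measurements; it is illustrative only, since the patent's circuit *learns* an approximation of this function through neural learning rules rather than computing it from known system parameters. All parameter names (`a`, `c`, `q`, `r`) are standard textbook notation, not taken from the patent.

```python
import numpy as np

def kalman_filter(measurements, a=1.0, c=1.0, q=1e-3, r=0.25,
                  x0=0.0, p0=1.0):
    """Online optimal estimation of a scalar state from noisy measurements.

    a: state-transition coefficient, c: observation gain,
    q: process-noise variance,     r: measurement-noise variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: propagate state estimate and its variance forward.
        x = a * x
        p = a * p * a + q
        # Update: blend the prediction with the new noisy measurement z,
        # weighted by the Kalman gain k.
        k = p * c / (c * p * c + r)
        x = x + k * (z - c * x)      # correct by the weighted innovation
        p = (1 - k * c) * p
        estimates.append(x)
    return estimates

# Example: recover a constant signal (true value 1.0) from noisy readings.
rng = np.random.default_rng(0)
z = 1.0 + 0.5 * rng.standard_normal(200)
est = kalman_filter(z)
```

The filter's estimates converge toward the true signal while smoothing out the measurement noise; the patent's contribution is a network that arrives at equivalent behavior using only local, neural learning rules.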

REFERENCES:
patent: 5408424 (1995-04-01), Lo
patent: 5877954 (1999-03-01), Klimasauskas et al.
patent: 5956702 (1999-09-01), Matsuoka et al.
patent: 5963929 (1999-10-01), Lo
patent: 6272480 (2001-08-01), Tresp et al.
patent: 6278962 (2001-08-01), Klimasauskas et al.
patent: 6748098 (2004-06-01), Rosenfeld
patent: 6978182 (2005-12-01), Mazar et al.
patent: 7009511 (2006-03-01), Mazar et al.
patent: 7065409 (2006-06-01), Mazar
patent: 7076091 (2006-07-01), Rosenfeld
patent: 7127300 (2006-10-01), Mazar et al.
patent: 7289761 (2007-10-01), Mazar
patent: 7292139 (2007-11-01), Mazar et al.
Jitter and error performance analysis of QPR-TCM and neural network equivalent systems over mobile satellite channels Ucan, O.N.; Personal, Indoor and Mobile Radio Communications, 1996. PIMRC'96., Seventh IEEE International Symposium on Vol. 2, Oct. 15-18, 1996 pp. 457-461 vol. 2 Digital Object Identifier 10.1109/PIMRC.1996.567436.
Performance of trellis coded M-PSK and neural network equivalent systems over partial response-fading channels with imperfect phase reference Ucan, O.N.; Universal Personal Communications, 1996. Record., 1996 5th IEEE International Conference on Vol. 2, Sep. 29-Oct. 2, 1996 pp. 528-532 vol. 2 Digital Object Identifier 10.1109/ICUPC.1996.562629.
A locally quadratic model of the motion estimation error criterion function and its application to subpixel interpolations Xiaoming Li; Gonzales, C.; Circuits and Systems for Video Technology, IEEE Transactions on vol. 6, Issue 1, Feb. 1996 pp. 118-122 Digital Object Identifier 10.1109/76.486427.
Subband video coding with smooth motion compensation Fuldseth, A.; Ramstad, T.A.; Acoustics, Speech, and Signal Processing, 1996. ICASSP-96. Conference Proceedings., 1996 IEEE International Conference on vol. 4, May 7-10, 1996 pp. 2331-2334 vol. 4 Digital Object Identifier 10.1109/ICASSP.1996.547749.
R. Linsker; IBM Research Division, T.J. Watson Research Center, Neural Computation 4, 691-702 (1992); Massachusetts Institute of Technology: Local Synaptic Learning Rules Suffice to Maximize Mutual Information in a Linear Network.
R. J. Williams, College of Computer Science, Northeastern University, National Science Foundation; pp. 1-6; Training Recurrent Networks Using the Extended Kalman Filter.
I. Szita, et al.; Neural Computation 16, 491-499 (2004) Massachusetts Institute of Technology; Kalman Filter Control Embedded into the Reinforcement Learning Framework.
G. Szirtes, et al.; Science Direct; Neurocomputing; Neural Kalman Filter; p. 1-7.
I. Rivals, et al.; Neurocomputing 20 (1-3): 279-294 (1998); A Recursive algorithm based on the extended Kalman filter for the training of feedforward neural models.
R. Linsker; Science Direct; Neural Networks 18 (2005) 261-265; Improved local learning rule for information maximization and related applications.
R. Kalman; Journal of Basic Engineering; Mar. 1960 pp. 35-45: A New Approach to Linear Filtering and Prediction Problems.
S. Becker, et al.; Department of Computer Science, University of Toronto; Nature vol. 355; Jan. 1992; pp. 161-163: Self-organizing neural network that discovers surfaces in random-dot stereograms.
G. Puskorius, et al.; Kalman Filtering and Neural Networks; 2001 J. Wiley and Sons, Inc.: Parameter-Based Kalman Filter Training: Theory and Implementation pp. 23-66.
S. Singhal, et al.; 1989, Bell Communications Research, Inc.; pp. 133-141: Training Multilayer Perceptrons with the Extended Kalman Algorithm.

Profile ID: LFUS-PAI-O-3969095
